Test Report: Docker_Linux_crio 22112

                    
                      236742b414df344dfb04283ee96fef673bd34cb2:2025-12-12:42745
                    
                

Test fail (28/415)

Order failed test Duration
38 TestAddons/serial/Volcano 0.24
44 TestAddons/parallel/Registry 12.58
45 TestAddons/parallel/RegistryCreds 0.41
46 TestAddons/parallel/Ingress 147.49
47 TestAddons/parallel/InspektorGadget 5.25
48 TestAddons/parallel/MetricsServer 5.32
50 TestAddons/parallel/CSI 38.68
51 TestAddons/parallel/Headlamp 2.42
52 TestAddons/parallel/CloudSpanner 6.27
53 TestAddons/parallel/LocalPath 8.08
54 TestAddons/parallel/NvidiaDevicePlugin 5.24
55 TestAddons/parallel/Yakd 6.26
56 TestAddons/parallel/AmdGpuDevicePlugin 6.24
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 2.3
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 2.3
294 TestJSONOutput/pause/Command 2.24
300 TestJSONOutput/unpause/Command 1.78
393 TestPause/serial/Pause 6.96
402 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.36
404 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.22
416 TestStartStop/group/old-k8s-version/serial/Pause 6.15
424 TestStartStop/group/no-preload/serial/Pause 7.02
427 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.07
432 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.24
436 TestStartStop/group/newest-cni/serial/Pause 5.74
440 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.27
461 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.35
467 TestStartStop/group/embed-certs/serial/Pause 7.23
x
+
TestAddons/serial/Volcano (0.24s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable volcano --alsologtostderr -v=1: exit status 11 (243.17204ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:10.967694   19039 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:10.968005   19039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:10.968014   19039 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:10.968019   19039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:10.968259   19039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:10.968565   19039 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:10.968923   19039 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:10.968947   19039 addons.go:622] checking whether the cluster is paused
	I1212 19:31:10.969084   19039 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:10.969099   19039 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:10.969529   19039 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:10.988593   19039 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:10.988641   19039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:11.007892   19039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:11.102321   19039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:11.102382   19039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:11.130001   19039 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:11.130041   19039 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:11.130046   19039 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:11.130049   19039 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:11.130057   19039 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:11.130062   19039 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:11.130067   19039 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:11.130072   19039 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:11.130077   19039 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:11.130091   19039 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:11.130096   19039 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:11.130101   19039 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:11.130106   19039 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:11.130110   19039 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:11.130113   19039 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:11.130125   19039 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:11.130133   19039 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:11.130137   19039 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:11.130140   19039 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:11.130143   19039 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:11.130146   19039 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:11.130149   19039 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:11.130153   19039 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:11.130157   19039 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:11.130165   19039 cri.go:89] found id: ""
	I1212 19:31:11.130219   19039 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:11.143902   19039 out.go:203] 
	W1212 19:31:11.145045   19039 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:11.145065   19039 out.go:285] * 
	* 
	W1212 19:31:11.147943   19039 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:11.149171   19039 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)

                                                
                                    
x
+
TestAddons/parallel/Registry (12.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.288865ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002160262s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003174042s
addons_test.go:394: (dbg) Run:  kubectl --context addons-410014 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-410014 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-410014 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.15037279s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 ip
2025/12/12 19:31:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable registry --alsologtostderr -v=1: exit status 11 (233.111857ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:32.299574   21342 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:32.299866   21342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:32.299876   21342 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:32.299881   21342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:32.300126   21342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:32.300418   21342 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:32.300770   21342 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:32.300791   21342 addons.go:622] checking whether the cluster is paused
	I1212 19:31:32.300885   21342 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:32.300904   21342 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:32.301302   21342 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:32.317920   21342 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:32.317964   21342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:32.334201   21342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:32.425987   21342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:32.426059   21342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:32.454832   21342 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:32.454859   21342 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:32.454863   21342 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:32.454866   21342 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:32.454869   21342 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:32.454873   21342 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:32.454876   21342 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:32.454878   21342 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:32.454881   21342 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:32.454896   21342 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:32.454902   21342 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:32.454905   21342 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:32.454908   21342 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:32.454911   21342 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:32.454914   21342 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:32.454923   21342 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:32.454928   21342 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:32.454932   21342 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:32.454935   21342 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:32.454938   21342 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:32.454941   21342 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:32.454943   21342 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:32.454946   21342 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:32.454949   21342 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:32.454952   21342 cri.go:89] found id: ""
	I1212 19:31:32.454988   21342 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:32.470932   21342 out.go:203] 
	W1212 19:31:32.472176   21342 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:32.472201   21342 out.go:285] * 
	* 
	W1212 19:31:32.475779   21342 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:32.476993   21342 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.58s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.41s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.583825ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-410014
addons_test.go:334: (dbg) Run:  kubectl --context addons-410014 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (253.140644ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:25.436476   20441 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:25.436805   20441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:25.436818   20441 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:25.436824   20441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:25.437130   20441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:25.437482   20441 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:25.437916   20441 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:25.437940   20441 addons.go:622] checking whether the cluster is paused
	I1212 19:31:25.438080   20441 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:25.438100   20441 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:25.438645   20441 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:25.456653   20441 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:25.456711   20441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:25.477416   20441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:25.575318   20441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:25.575389   20441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:25.606122   20441 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:25.606154   20441 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:25.606159   20441 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:25.606164   20441 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:25.606168   20441 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:25.606173   20441 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:25.606178   20441 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:25.606183   20441 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:25.606188   20441 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:25.606203   20441 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:25.606209   20441 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:25.606217   20441 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:25.606222   20441 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:25.606229   20441 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:25.606234   20441 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:25.606245   20441 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:25.606251   20441 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:25.606255   20441 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:25.606259   20441 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:25.606263   20441 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:25.606268   20441 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:25.606295   20441 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:25.606303   20441 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:25.606307   20441 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:25.606312   20441 cri.go:89] found id: ""
	I1212 19:31:25.606356   20441 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:25.619620   20441 out.go:203] 
	W1212 19:31:25.620808   20441 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:25.620827   20441 out.go:285] * 
	* 
	W1212 19:31:25.624328   20441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:25.625448   20441 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (147.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-410014 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-410014 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-410014 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [261966ae-eab9-424d-ad02-778c89084278] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [261966ae-eab9-424d-ad02-778c89084278] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003398626s
I1212 19:31:33.588123    9254 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.119635174s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-410014 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-410014
helpers_test.go:244: (dbg) docker inspect addons-410014:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1",
	        "Created": "2025-12-12T19:29:01.077342227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11682,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:29:01.116785757Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/hosts",
	        "LogPath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1-json.log",
	        "Name": "/addons-410014",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-410014:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-410014",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1",
	                "LowerDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-410014",
	                "Source": "/var/lib/docker/volumes/addons-410014/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-410014",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-410014",
	                "name.minikube.sigs.k8s.io": "addons-410014",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "81995d9c28c8d1f7f8986d14bf40fa0588f8033c648b03b6ed26d2c9cf70e2e0",
	            "SandboxKey": "/var/run/docker/netns/81995d9c28c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-410014": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "adb88a589ecdd26a7a3a0a28470b93010384464bc8b7cf07d4fddcf94860e84f",
	                    "EndpointID": "88b50d26f71269edfeee1b207e9038fd84bf601c1dee180480b698d623af9f8f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0a:c8:45:73:a5:91",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-410014",
	                        "4a5536dc1575"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-410014 -n addons-410014
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-410014 logs -n 25: (1.09119016s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-608278 --alsologtostderr --binary-mirror http://127.0.0.1:36999 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-608278 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ -p binary-mirror-608278                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-608278 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ addons  │ enable dashboard -p addons-410014                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-410014                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ start   │ -p addons-410014 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:31 UTC │
	│ addons  │ addons-410014 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-410014 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-410014                                                                                                                                                                                                                                                                                                                                                                                           │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │ 12 Dec 25 19:31 UTC │
	│ addons  │ addons-410014 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ ip      │ addons-410014 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │ 12 Dec 25 19:31 UTC │
	│ addons  │ addons-410014 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ ssh     │ addons-410014 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ ssh     │ addons-410014 ssh cat /opt/local-path-provisioner/pvc-e79eb42e-1321-4a09-9867-49823fdf7fbb_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │ 12 Dec 25 19:31 UTC │
	│ addons  │ addons-410014 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │                     │
	│ addons  │ addons-410014 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │                     │
	│ ip      │ addons-410014 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-410014        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
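	Each row in the table above is one invocation of the minikube binary under test against the addons-410014 profile; rows with an empty End Time column had not completed successfully when the report was captured. Reproducing a row by hand would look roughly like the following (binary path taken from the MINIKUBE_BIN value in the start log below):

	    out/minikube-linux-amd64 -p addons-410014 addons disable registry --alsologtostderr -v=1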
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:28:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:28:38.222893   11017 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:28:38.222974   11017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:38.222978   11017 out.go:374] Setting ErrFile to fd 2...
	I1212 19:28:38.222982   11017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:38.223152   11017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:28:38.223641   11017 out.go:368] Setting JSON to false
	I1212 19:28:38.224394   11017 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":665,"bootTime":1765567053,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:28:38.224439   11017 start.go:143] virtualization: kvm guest
	I1212 19:28:38.226152   11017 out.go:179] * [addons-410014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:28:38.227243   11017 notify.go:221] Checking for updates...
	I1212 19:28:38.227268   11017 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:28:38.228322   11017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:28:38.229396   11017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:28:38.230412   11017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:28:38.231423   11017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:28:38.232356   11017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:28:38.233477   11017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:28:38.254216   11017 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:28:38.254362   11017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:38.304136   11017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-12 19:28:38.295507384 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:38.304230   11017 docker.go:319] overlay module found
	I1212 19:28:38.305647   11017 out.go:179] * Using the docker driver based on user configuration
	I1212 19:28:38.306799   11017 start.go:309] selected driver: docker
	I1212 19:28:38.306810   11017 start.go:927] validating driver "docker" against <nil>
	I1212 19:28:38.306820   11017 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:28:38.307336   11017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:38.356845   11017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-12 19:28:38.347930449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:38.356975   11017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:28:38.357193   11017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:28:38.358539   11017 out.go:179] * Using Docker driver with root privileges
	I1212 19:28:38.359624   11017 cni.go:84] Creating CNI manager for ""
	I1212 19:28:38.359679   11017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 19:28:38.359689   11017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 19:28:38.359758   11017 start.go:353] cluster config:
	{Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1212 19:28:38.360853   11017 out.go:179] * Starting "addons-410014" primary control-plane node in "addons-410014" cluster
	I1212 19:28:38.361900   11017 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 19:28:38.362829   11017 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:28:38.363835   11017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:28:38.363859   11017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 19:28:38.363867   11017 cache.go:65] Caching tarball of preloaded images
	I1212 19:28:38.363869   11017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:28:38.363968   11017 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 19:28:38.363981   11017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 19:28:38.364318   11017 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/config.json ...
	I1212 19:28:38.364343   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/config.json: {Name:mk5485d62eb36051e12a4afe212d8d5f2a720327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:28:38.380972   11017 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:28:38.381084   11017 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 19:28:38.381100   11017 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 19:28:38.381104   11017 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 19:28:38.381114   11017 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 19:28:38.381121   11017 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from local cache
	I1212 19:28:50.846232   11017 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from cached tarball
	I1212 19:28:50.846266   11017 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:28:50.846322   11017 start.go:360] acquireMachinesLock for addons-410014: {Name:mka5adb08d7923b35d736bb0962856278eccf142 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:28:50.846412   11017 start.go:364] duration metric: took 69.374µs to acquireMachinesLock for "addons-410014"
	I1212 19:28:50.846445   11017 start.go:93] Provisioning new machine with config: &{Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 19:28:50.846500   11017 start.go:125] createHost starting for "" (driver="docker")
	I1212 19:28:50.848056   11017 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 19:28:50.848291   11017 start.go:159] libmachine.API.Create for "addons-410014" (driver="docker")
	I1212 19:28:50.848326   11017 client.go:173] LocalClient.Create starting
	I1212 19:28:50.848453   11017 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 19:28:51.080759   11017 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 19:28:51.149227   11017 cli_runner.go:164] Run: docker network inspect addons-410014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 19:28:51.166935   11017 cli_runner.go:211] docker network inspect addons-410014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 19:28:51.167003   11017 network_create.go:284] running [docker network inspect addons-410014] to gather additional debugging logs...
	I1212 19:28:51.167022   11017 cli_runner.go:164] Run: docker network inspect addons-410014
	W1212 19:28:51.181871   11017 cli_runner.go:211] docker network inspect addons-410014 returned with exit code 1
	I1212 19:28:51.181903   11017 network_create.go:287] error running [docker network inspect addons-410014]: docker network inspect addons-410014: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-410014 not found
	I1212 19:28:51.181920   11017 network_create.go:289] output of [docker network inspect addons-410014]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-410014 not found
	
	** /stderr **
	I1212 19:28:51.182052   11017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:28:51.197848   11017 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc4170}
	I1212 19:28:51.197883   11017 network_create.go:124] attempt to create docker network addons-410014 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 19:28:51.197945   11017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-410014 addons-410014
	I1212 19:28:51.243691   11017 network_create.go:108] docker network addons-410014 192.168.49.0/24 created
	I1212 19:28:51.243719   11017 kic.go:121] calculated static IP "192.168.49.2" for the "addons-410014" container
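	A quick way to double-check the network created above is to query the docker CLI directly for the subnet and gateway; a minimal check, using the network name from the log:

	    docker network inspect addons-410014 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # expected: 192.168.49.0/24 192.168.49.1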
	I1212 19:28:51.243767   11017 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 19:28:51.257741   11017 cli_runner.go:164] Run: docker volume create addons-410014 --label name.minikube.sigs.k8s.io=addons-410014 --label created_by.minikube.sigs.k8s.io=true
	I1212 19:28:51.273433   11017 oci.go:103] Successfully created a docker volume addons-410014
	I1212 19:28:51.273491   11017 cli_runner.go:164] Run: docker run --rm --name addons-410014-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-410014 --entrypoint /usr/bin/test -v addons-410014:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 19:28:57.316601   11017 cli_runner.go:217] Completed: docker run --rm --name addons-410014-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-410014 --entrypoint /usr/bin/test -v addons-410014:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (6.043060061s)
	I1212 19:28:57.316638   11017 oci.go:107] Successfully prepared a docker volume addons-410014
	I1212 19:28:57.316700   11017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:28:57.316712   11017 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 19:28:57.316756   11017 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-410014:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 19:29:01.011891   11017 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-410014:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (3.695090167s)
	I1212 19:29:01.011924   11017 kic.go:203] duration metric: took 3.695207303s to extract preloaded images to volume ...
	W1212 19:29:01.012033   11017 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 19:29:01.012079   11017 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 19:29:01.012129   11017 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 19:29:01.062395   11017 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-410014 --name addons-410014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-410014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-410014 --network addons-410014 --ip 192.168.49.2 --volume addons-410014:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 19:29:01.342237   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Running}}
	I1212 19:29:01.360338   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:01.377559   11017 cli_runner.go:164] Run: docker exec addons-410014 stat /var/lib/dpkg/alternatives/iptables
	I1212 19:29:01.422474   11017 oci.go:144] the created container "addons-410014" has a running status.
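	The kic container publishes SSH, the API server, and the other node ports only on 127.0.0.1, each on a dynamically assigned host port; the mapping the provisioner resolves below (host port 32768 for 22/tcp in this run) can be listed with:

	    docker port addons-410014 22/tcp
	    # e.g. 127.0.0.1:32768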
	I1212 19:29:01.422503   11017 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa...
	I1212 19:29:01.587621   11017 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 19:29:01.613825   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:01.632734   11017 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 19:29:01.632759   11017 kic_runner.go:114] Args: [docker exec --privileged addons-410014 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 19:29:01.689906   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:01.710446   11017 machine.go:94] provisionDockerMachine start ...
	I1212 19:29:01.710515   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:01.730539   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:01.730751   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:01.730763   11017 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:29:01.859687   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-410014
	
	I1212 19:29:01.859726   11017 ubuntu.go:182] provisioning hostname "addons-410014"
	I1212 19:29:01.859800   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:01.877739   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:01.877943   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:01.877956   11017 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-410014 && echo "addons-410014" | sudo tee /etc/hostname
	I1212 19:29:02.015581   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-410014
	
	I1212 19:29:02.015667   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.033975   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:02.034202   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:02.034226   11017 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-410014' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-410014/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-410014' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:29:02.159981   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:29:02.160013   11017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 19:29:02.160032   11017 ubuntu.go:190] setting up certificates
	I1212 19:29:02.160040   11017 provision.go:84] configureAuth start
	I1212 19:29:02.160088   11017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-410014
	I1212 19:29:02.176068   11017 provision.go:143] copyHostCerts
	I1212 19:29:02.176125   11017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 19:29:02.176240   11017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 19:29:02.176328   11017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 19:29:02.176385   11017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.addons-410014 san=[127.0.0.1 192.168.49.2 addons-410014 localhost minikube]
	I1212 19:29:02.222492   11017 provision.go:177] copyRemoteCerts
	I1212 19:29:02.222533   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:29:02.222568   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.238789   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.330183   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 19:29:02.347394   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:29:02.362996   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:29:02.378425   11017 provision.go:87] duration metric: took 218.375986ms to configureAuth
	I1212 19:29:02.378445   11017 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:29:02.378587   11017 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:29:02.378681   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.395267   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:02.395471   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:02.395489   11017 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 19:29:02.653198   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 19:29:02.653224   11017 machine.go:97] duration metric: took 942.75767ms to provisionDockerMachine
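	The CRIO_MINIKUBE_OPTIONS value written above ends up in /etc/sysconfig/crio.minikube inside the node; it can be confirmed from the host with the binary named by MINIKUBE_BIN (a sketch, assuming the profile from this run):

	    out/minikube-linux-amd64 -p addons-410014 ssh -- cat /etc/sysconfig/crio.minikube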
	I1212 19:29:02.653237   11017 client.go:176] duration metric: took 11.804899719s to LocalClient.Create
	I1212 19:29:02.653253   11017 start.go:167] duration metric: took 11.804965624s to libmachine.API.Create "addons-410014"
	I1212 19:29:02.653260   11017 start.go:293] postStartSetup for "addons-410014" (driver="docker")
	I1212 19:29:02.653268   11017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:29:02.653344   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:29:02.653378   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.670250   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.763629   11017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:29:02.766703   11017 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:29:02.766730   11017 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:29:02.766740   11017 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 19:29:02.766792   11017 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 19:29:02.766815   11017 start.go:296] duration metric: took 113.550076ms for postStartSetup
	I1212 19:29:02.767070   11017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-410014
	I1212 19:29:02.783623   11017 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/config.json ...
	I1212 19:29:02.783857   11017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:29:02.783901   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.799441   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.888492   11017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:29:02.892397   11017 start.go:128] duration metric: took 12.045882809s to createHost
	I1212 19:29:02.892415   11017 start.go:83] releasing machines lock for "addons-410014", held for 12.045992155s
	I1212 19:29:02.892498   11017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-410014
	I1212 19:29:02.908852   11017 ssh_runner.go:195] Run: cat /version.json
	I1212 19:29:02.908890   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.908924   11017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:29:02.908997   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.925550   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.926551   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:03.069675   11017 ssh_runner.go:195] Run: systemctl --version
	I1212 19:29:03.075345   11017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 19:29:03.106914   11017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:29:03.110969   11017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:29:03.111017   11017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:29:03.133575   11017 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 19:29:03.133595   11017 start.go:496] detecting cgroup driver to use...
	I1212 19:29:03.133622   11017 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 19:29:03.133653   11017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:29:03.147816   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:29:03.158439   11017 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:29:03.158475   11017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:29:03.173117   11017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:29:03.188357   11017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:29:03.264812   11017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:29:03.346147   11017 docker.go:234] disabling docker service ...
	I1212 19:29:03.346217   11017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:29:03.362651   11017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:29:03.373576   11017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:29:03.449061   11017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:29:03.525751   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:29:03.536975   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:29:03.549437   11017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 19:29:03.549488   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.558487   11017 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 19:29:03.558538   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.566209   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.573701   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.581134   11017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:29:03.588114   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.595709   11017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.607741   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
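	Taken together, the sed edits above leave the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch reconstructed from the commands, not a dump of the actual file):

	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]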
	I1212 19:29:03.615386   11017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:29:03.621800   11017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 19:29:03.621858   11017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 19:29:03.632586   11017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
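	The netfilter preparation above (loading br_netfilter after the sysctl probe failed, then enabling IP forwarding) can be verified inside the node with standard tools:

	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward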
	I1212 19:29:03.639058   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:29:03.714883   11017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 19:29:03.837028   11017 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 19:29:03.837091   11017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 19:29:03.840671   11017 start.go:564] Will wait 60s for crictl version
	I1212 19:29:03.840721   11017 ssh_runner.go:195] Run: which crictl
	I1212 19:29:03.843944   11017 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:29:03.865880   11017 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 19:29:03.865969   11017 ssh_runner.go:195] Run: crio --version
	I1212 19:29:03.890965   11017 ssh_runner.go:195] Run: crio --version
	I1212 19:29:03.917166   11017 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 19:29:03.918244   11017 cli_runner.go:164] Run: docker network inspect addons-410014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:29:03.934208   11017 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:29:03.937739   11017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:29:03.947012   11017 kubeadm.go:884] updating cluster {Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:29:03.947116   11017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:29:03.947165   11017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:29:03.975793   11017 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 19:29:03.975809   11017 crio.go:433] Images already preloaded, skipping extraction
	I1212 19:29:03.975843   11017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:29:03.997820   11017 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 19:29:03.997838   11017 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:29:03.997845   11017 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 19:29:03.997925   11017 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-410014 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:29:03.997983   11017 ssh_runner.go:195] Run: crio config
	I1212 19:29:04.040728   11017 cni.go:84] Creating CNI manager for ""
	I1212 19:29:04.040748   11017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 19:29:04.040765   11017 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:29:04.040784   11017 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-410014 NodeName:addons-410014 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:29:04.040882   11017 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-410014"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:29:04.040937   11017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 19:29:04.048214   11017 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:29:04.048255   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:29:04.055251   11017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 19:29:04.066552   11017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 19:29:04.079977   11017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1212 19:29:04.090986   11017 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:29:04.094051   11017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:29:04.102741   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:29:04.178644   11017 ssh_runner.go:195] Run: sudo systemctl start kubelet
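	At this point the kubeadm configuration shown above has been written to /var/tmp/minikube/kubeadm.yaml.new and the kubelet unit has been started; two quick checks from the host (assuming the profile from this run):

	    out/minikube-linux-amd64 -p addons-410014 ssh -- sudo systemctl is-active kubelet
	    out/minikube-linux-amd64 -p addons-410014 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new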
	I1212 19:29:04.200109   11017 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014 for IP: 192.168.49.2
	I1212 19:29:04.200129   11017 certs.go:195] generating shared ca certs ...
	I1212 19:29:04.200151   11017 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.200264   11017 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 19:29:04.300233   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt ...
	I1212 19:29:04.300257   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt: {Name:mk811712a324d18afa5f7a10469f88bc4b90d914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.300436   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key ...
	I1212 19:29:04.300452   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key: {Name:mk97a8f04d69b14c722e80dd1116f301709afb08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.300557   11017 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 19:29:04.339804   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt ...
	I1212 19:29:04.339822   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt: {Name:mkf7a019fbaaaa81eec129dd4b7b743eec9e9e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.339958   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key ...
	I1212 19:29:04.339970   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key: {Name:mk9371d9666838d118eac78114fa34de285870e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.340061   11017 certs.go:257] generating profile certs ...
	I1212 19:29:04.340112   11017 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.key
	I1212 19:29:04.340125   11017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt with IP's: []
	I1212 19:29:04.523303   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt ...
	I1212 19:29:04.523323   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: {Name:mkbe12ab5afb981d7a65696fbfae2b599f08d7cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.523472   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.key ...
	I1212 19:29:04.523485   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.key: {Name:mkda5c082a9613c615d115541247dd4c7901992d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.523578   11017 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d
	I1212 19:29:04.523602   11017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 19:29:04.609307   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d ...
	I1212 19:29:04.609325   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d: {Name:mk2db75e8d4509f0173300ca92c7ac1b67562c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.609460   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d ...
	I1212 19:29:04.609475   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d: {Name:mk23ffbaeeebc87c7c135375b55ae863856538f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.609569   11017 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt
	I1212 19:29:04.609659   11017 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key
	I1212 19:29:04.609712   11017 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key
	I1212 19:29:04.609729   11017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt with IP's: []
	I1212 19:29:04.694811   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt ...
	I1212 19:29:04.694827   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt: {Name:mkfe272bb22fc96b67cdbcf6423083ea3ed13521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.694967   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key ...
	I1212 19:29:04.694979   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key: {Name:mk5f850397c70ca2dd135637b7b928ad321718df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.695161   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:29:04.695194   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:29:04.695219   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:29:04.695242   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 19:29:04.695762   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:29:04.712720   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:29:04.728337   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:29:04.743633   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 19:29:04.758998   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 19:29:04.774347   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:29:04.789614   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:29:04.804782   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 19:29:04.820034   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:29:04.837183   11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:29:04.848161   11017 ssh_runner.go:195] Run: openssl version
	I1212 19:29:04.853618   11017 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.860039   11017 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:29:04.868755   11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.871955   11017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.871994   11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.905601   11017 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:29:04.912003   11017 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 19:29:04.918524   11017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:29:04.921579   11017 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 19:29:04.921628   11017 kubeadm.go:401] StartCluster: {Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:04.921705   11017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:29:04.921767   11017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:29:04.945729   11017 cri.go:89] found id: ""
	I1212 19:29:04.945781   11017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:29:04.952598   11017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:29:04.959503   11017 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:29:04.959535   11017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:29:04.966550   11017 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:29:04.966565   11017 kubeadm.go:158] found existing configuration files:
	
	I1212 19:29:04.966594   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 19:29:04.973250   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:29:04.973307   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:29:04.979771   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 19:29:04.986319   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:29:04.986350   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:29:04.992602   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 19:29:04.999190   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:29:04.999226   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:29:05.005582   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 19:29:05.012408   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:29:05.012453   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:29:05.018913   11017 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:29:05.053204   11017 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 19:29:05.053326   11017 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:29:05.070775   11017 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:29:05.070837   11017 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 19:29:05.070878   11017 kubeadm.go:319] OS: Linux
	I1212 19:29:05.070925   11017 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:29:05.070983   11017 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:29:05.071080   11017 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:29:05.071168   11017 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:29:05.071247   11017 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:29:05.071328   11017 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:29:05.071407   11017 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:29:05.071472   11017 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 19:29:05.121614   11017 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:29:05.121786   11017 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:29:05.121945   11017 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:29:05.128736   11017 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:29:05.130615   11017 out.go:252]   - Generating certificates and keys ...
	I1212 19:29:05.130708   11017 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:29:05.130806   11017 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:29:05.476018   11017 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 19:29:05.637824   11017 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 19:29:05.879930   11017 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 19:29:06.055138   11017 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 19:29:06.192776   11017 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 19:29:06.192916   11017 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-410014 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:29:06.458749   11017 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 19:29:06.458933   11017 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-410014 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:29:06.593624   11017 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 19:29:06.723088   11017 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 19:29:06.786106   11017 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 19:29:06.786209   11017 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:29:06.829336   11017 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:29:06.968266   11017 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:29:07.216802   11017 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:29:07.520766   11017 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:29:07.842883   11017 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:29:07.843361   11017 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:29:07.847080   11017 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:29:07.850392   11017 out.go:252]   - Booting up control plane ...
	I1212 19:29:07.850489   11017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:29:07.850583   11017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:29:07.850678   11017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:29:07.862648   11017 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:29:07.862790   11017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:29:07.870561   11017 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:29:07.870827   11017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:29:07.870898   11017 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:29:07.964916   11017 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:29:07.965099   11017 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 19:29:08.466368   11017 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.601603ms
	I1212 19:29:08.469003   11017 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 19:29:08.469113   11017 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1212 19:29:08.469251   11017 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 19:29:08.469383   11017 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 19:29:09.481677   11017 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.012542756s
	I1212 19:29:10.559534   11017 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.090371298s
	I1212 19:29:11.970324   11017 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501193265s
	I1212 19:29:11.984046   11017 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 19:29:11.992393   11017 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 19:29:12.000599   11017 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 19:29:12.000880   11017 kubeadm.go:319] [mark-control-plane] Marking the node addons-410014 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 19:29:12.008087   11017 kubeadm.go:319] [bootstrap-token] Using token: b6z8qq.wplclg88br34tsuo
	I1212 19:29:12.009442   11017 out.go:252]   - Configuring RBAC rules ...
	I1212 19:29:12.009586   11017 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 19:29:12.015302   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 19:29:12.019739   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 19:29:12.021925   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 19:29:12.024024   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 19:29:12.027063   11017 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 19:29:12.375859   11017 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 19:29:12.798671   11017 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 19:29:13.375753   11017 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 19:29:13.376423   11017 kubeadm.go:319] 
	I1212 19:29:13.376514   11017 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 19:29:13.376525   11017 kubeadm.go:319] 
	I1212 19:29:13.376643   11017 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 19:29:13.376656   11017 kubeadm.go:319] 
	I1212 19:29:13.376676   11017 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 19:29:13.376767   11017 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 19:29:13.376815   11017 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 19:29:13.376824   11017 kubeadm.go:319] 
	I1212 19:29:13.376905   11017 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 19:29:13.376917   11017 kubeadm.go:319] 
	I1212 19:29:13.376973   11017 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 19:29:13.376979   11017 kubeadm.go:319] 
	I1212 19:29:13.377022   11017 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 19:29:13.377142   11017 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 19:29:13.377249   11017 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 19:29:13.377264   11017 kubeadm.go:319] 
	I1212 19:29:13.377399   11017 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 19:29:13.377503   11017 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 19:29:13.377518   11017 kubeadm.go:319] 
	I1212 19:29:13.377583   11017 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token b6z8qq.wplclg88br34tsuo \
	I1212 19:29:13.377720   11017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 19:29:13.377749   11017 kubeadm.go:319] 	--control-plane 
	I1212 19:29:13.377759   11017 kubeadm.go:319] 
	I1212 19:29:13.377885   11017 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 19:29:13.377901   11017 kubeadm.go:319] 
	I1212 19:29:13.377989   11017 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token b6z8qq.wplclg88br34tsuo \
	I1212 19:29:13.378116   11017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 19:29:13.379784   11017 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 19:29:13.379893   11017 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 19:29:13.379918   11017 cni.go:84] Creating CNI manager for ""
	I1212 19:29:13.379925   11017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 19:29:13.381238   11017 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 19:29:13.382239   11017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 19:29:13.386134   11017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 19:29:13.386151   11017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 19:29:13.398586   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 19:29:13.583248   11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 19:29:13.583373   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:13.583373   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-410014 minikube.k8s.io/updated_at=2025_12_12T19_29_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=addons-410014 minikube.k8s.io/primary=true
	I1212 19:29:13.592892   11017 ops.go:34] apiserver oom_adj: -16
	I1212 19:29:13.659854   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:14.160783   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:14.660045   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:15.160201   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:15.660628   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:16.160088   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:16.660702   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:17.159953   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:17.660406   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:18.160829   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:18.660729   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:18.725227   11017 kubeadm.go:1114] duration metric: took 5.141905239s to wait for elevateKubeSystemPrivileges
	I1212 19:29:18.725282   11017 kubeadm.go:403] duration metric: took 13.803646642s to StartCluster
	I1212 19:29:18.725306   11017 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:18.725411   11017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:29:18.725853   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:18.726053   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 19:29:18.726077   11017 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 19:29:18.726132   11017 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1212 19:29:18.726256   11017 addons.go:70] Setting yakd=true in profile "addons-410014"
	I1212 19:29:18.726303   11017 addons.go:239] Setting addon yakd=true in "addons-410014"
	I1212 19:29:18.726313   11017 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:29:18.726333   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726337   11017 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-410014"
	I1212 19:29:18.726316   11017 addons.go:70] Setting inspektor-gadget=true in profile "addons-410014"
	I1212 19:29:18.726361   11017 addons.go:70] Setting cloud-spanner=true in profile "addons-410014"
	I1212 19:29:18.726367   11017 addons.go:239] Setting addon inspektor-gadget=true in "addons-410014"
	I1212 19:29:18.726371   11017 addons.go:239] Setting addon cloud-spanner=true in "addons-410014"
	I1212 19:29:18.726393   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726396   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726625   11017 addons.go:70] Setting registry-creds=true in profile "addons-410014"
	I1212 19:29:18.726652   11017 addons.go:239] Setting addon registry-creds=true in "addons-410014"
	I1212 19:29:18.726681   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726875   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.726891   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.726915   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.726987   11017 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-410014"
	I1212 19:29:18.727012   11017 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-410014"
	I1212 19:29:18.727099   11017 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-410014"
	I1212 19:29:18.727120   11017 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-410014"
	I1212 19:29:18.727122   11017 addons.go:70] Setting storage-provisioner=true in profile "addons-410014"
	I1212 19:29:18.727144   11017 addons.go:239] Setting addon storage-provisioner=true in "addons-410014"
	I1212 19:29:18.727164   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727167   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727171   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727295   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727342   11017 addons.go:70] Setting registry=true in profile "addons-410014"
	I1212 19:29:18.727362   11017 addons.go:239] Setting addon registry=true in "addons-410014"
	I1212 19:29:18.727385   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727621   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727665   11017 addons.go:70] Setting volumesnapshots=true in profile "addons-410014"
	I1212 19:29:18.727692   11017 addons.go:239] Setting addon volumesnapshots=true in "addons-410014"
	I1212 19:29:18.727720   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727794   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.728170   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.729206   11017 addons.go:70] Setting volcano=true in profile "addons-410014"
	I1212 19:29:18.729227   11017 addons.go:239] Setting addon volcano=true in "addons-410014"
	I1212 19:29:18.729263   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.729762   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.731010   11017 addons.go:70] Setting gcp-auth=true in profile "addons-410014"
	I1212 19:29:18.731037   11017 mustload.go:66] Loading cluster: addons-410014
	I1212 19:29:18.731220   11017 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:29:18.731498   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727640   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.732710   11017 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-410014"
	I1212 19:29:18.732834   11017 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-410014"
	I1212 19:29:18.732871   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.733334   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727648   11017 addons.go:70] Setting metrics-server=true in profile "addons-410014"
	I1212 19:29:18.734650   11017 addons.go:239] Setting addon metrics-server=true in "addons-410014"
	I1212 19:29:18.734680   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.734694   11017 out.go:179] * Verifying Kubernetes components...
	I1212 19:29:18.734913   11017 addons.go:70] Setting default-storageclass=true in profile "addons-410014"
	I1212 19:29:18.734934   11017 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-410014"
	I1212 19:29:18.736099   11017 addons.go:70] Setting ingress=true in profile "addons-410014"
	I1212 19:29:18.736124   11017 addons.go:239] Setting addon ingress=true in "addons-410014"
	I1212 19:29:18.736171   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.736254   11017 addons.go:70] Setting ingress-dns=true in profile "addons-410014"
	I1212 19:29:18.736325   11017 addons.go:239] Setting addon ingress-dns=true in "addons-410014"
	I1212 19:29:18.726356   11017 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-410014"
	I1212 19:29:18.736576   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.736615   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.737043   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.737069   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.738348   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.741495   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.741935   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.741989   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:29:18.785063   11017 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1212 19:29:18.786755   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1212 19:29:18.786776   11017 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1212 19:29:18.786921   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.802684   11017 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1212 19:29:18.806664   11017 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1212 19:29:18.807962   11017 out.go:179]   - Using image docker.io/registry:3.0.0
	I1212 19:29:18.808008   11017 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1212 19:29:18.809564   11017 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1212 19:29:18.811066   11017 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-410014"
	I1212 19:29:18.813532   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.814057   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.811359   11017 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1212 19:29:18.814316   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 19:29:18.814371   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.813338   11017 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 19:29:18.814552   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1212 19:29:18.814612   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.815086   11017 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:29:18.815099   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1212 19:29:18.815145   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.815205   11017 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1212 19:29:18.816163   11017 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 19:29:18.816543   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1212 19:29:18.816595   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.827641   11017 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:29:18.827662   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 19:29:18.827721   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.836044   11017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:29:18.837296   11017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:29:18.837318   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:29:18.837393   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.849498   11017 addons.go:239] Setting addon default-storageclass=true in "addons-410014"
	I1212 19:29:18.849557   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.850050   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.851797   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 19:29:18.852112   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1212 19:29:18.852506   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 19:29:18.853234   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 19:29:18.853252   11017 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 19:29:18.853318   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.855058   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:29:18.855147   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 19:29:18.856513   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:29:18.857676   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 19:29:18.858190   11017 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:29:18.858239   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1212 19:29:18.858334   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.859802   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 19:29:18.862388   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 19:29:18.863656   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 19:29:18.864749   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 19:29:18.865883   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 19:29:18.867237   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 19:29:18.867308   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 19:29:18.867402   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	W1212 19:29:18.869719   11017 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1212 19:29:18.873526   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.875122   11017 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1212 19:29:18.876195   11017 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1212 19:29:18.876212   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1212 19:29:18.876258   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.886996   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.890034   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.899360   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.902410   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 19:29:18.902990   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.904050   11017 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1212 19:29:18.905386   11017 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 19:29:18.905406   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1212 19:29:18.905456   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.905588   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.910480   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.910695   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.912154   11017 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 19:29:18.913583   11017 out.go:179]   - Using image docker.io/busybox:stable
	I1212 19:29:18.914053   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.914890   11017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:29:18.914907   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 19:29:18.914977   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.918120   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.918663   11017 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1212 19:29:18.922025   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 19:29:18.922188   11017 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 19:29:18.922392   11017 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:29:18.922675   11017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:29:18.922735   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.922830   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.936396   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.944846   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	W1212 19:29:18.946413   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.946464   11017 retry.go:31] will retry after 249.941693ms: ssh: handshake failed: EOF
	I1212 19:29:18.959063   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.964504   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	W1212 19:29:18.964546   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.965585   11017 retry.go:31] will retry after 204.116261ms: ssh: handshake failed: EOF
	W1212 19:29:18.967014   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.969821   11017 retry.go:31] will retry after 165.388419ms: ssh: handshake failed: EOF
	I1212 19:29:18.972329   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	W1212 19:29:18.973777   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.974934   11017 retry.go:31] will retry after 340.686317ms: ssh: handshake failed: EOF
	I1212 19:29:18.980644   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.987595   11017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:29:19.054675   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 19:29:19.069475   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:29:19.079643   11017 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 19:29:19.079665   11017 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 19:29:19.079804   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:29:19.083608   11017 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 19:29:19.083626   11017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 19:29:19.084846   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 19:29:19.084934   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 19:29:19.085856   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 19:29:19.095216   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1212 19:29:19.095244   11017 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1212 19:29:19.095595   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:29:19.105356   11017 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:29:19.105379   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 19:29:19.110748   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:29:19.118250   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 19:29:19.118269   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 19:29:19.118747   11017 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 19:29:19.118769   11017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 19:29:19.122618   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:29:19.129505   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1212 19:29:19.129522   11017 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1212 19:29:19.139775   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:29:19.153070   11017 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 19:29:19.153092   11017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 19:29:19.160043   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 19:29:19.160065   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 19:29:19.176837   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1212 19:29:19.176864   11017 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1212 19:29:19.194895   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 19:29:19.194925   11017 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 19:29:19.223989   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 19:29:19.224017   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 19:29:19.231358   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1212 19:29:19.231382   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1212 19:29:19.238980   11017 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 19:29:19.240905   11017 node_ready.go:35] waiting up to 6m0s for node "addons-410014" to be "Ready" ...
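	The node_ready wait above polls the node object for up to 6 minutes until its Ready condition turns True. A rough manual equivalent, assuming a kubeconfig pointed at the addons-410014 cluster, would be:
	
		kubectl wait --for=condition=Ready node/addons-410014 --timeout=6m
	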
	I1212 19:29:19.259668   11017 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:29:19.259695   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 19:29:19.261906   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 19:29:19.261927   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 19:29:19.290954   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1212 19:29:19.309440   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 19:29:19.309473   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 19:29:19.320815   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:29:19.336135   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 19:29:19.336163   11017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 19:29:19.371479   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 19:29:19.371502   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 19:29:19.373845   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 19:29:19.373867   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 19:29:19.375620   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 19:29:19.401121   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1212 19:29:19.422416   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 19:29:19.422445   11017 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 19:29:19.448397   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 19:29:19.448439   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 19:29:19.475810   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:29:19.475848   11017 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 19:29:19.497128   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:29:19.497159   11017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 19:29:19.504954   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:29:19.511654   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:29:19.546680   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:29:19.745526   11017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-410014" context rescaled to 1 replicas
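	The rescale above pins the coredns deployment in kube-system to a single replica for this single-node cluster; a manual equivalent, assuming the default deployment name, would be:
	
		kubectl -n kube-system scale deployment coredns --replicas=1
	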
	I1212 19:29:20.234315   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.123522534s)
	I1212 19:29:20.234358   11017 addons.go:495] Verifying addon ingress=true in "addons-410014"
	I1212 19:29:20.234362   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.094554054s)
	I1212 19:29:20.234388   11017 addons.go:495] Verifying addon registry=true in "addons-410014"
	I1212 19:29:20.234315   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.111663116s)
	I1212 19:29:20.236018   11017 out.go:179] * Verifying ingress addon...
	I1212 19:29:20.236018   11017 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-410014 service yakd-dashboard -n yakd-dashboard
	
	I1212 19:29:20.236120   11017 out.go:179] * Verifying registry addon...
	I1212 19:29:20.238179   11017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 19:29:20.238212   11017 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 19:29:20.255256   11017 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 19:29:20.255564   11017 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 19:29:20.255585   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
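	The kapi.go waits above poll pods by label selector until they leave Pending. The same two selectors can be inspected by hand, assuming kubectl access to the cluster:
	
		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	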
	I1212 19:29:20.600702   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.27983014s)
	W1212 19:29:20.600748   11017 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 19:29:20.600764   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.225112921s)
	I1212 19:29:20.600782   11017 retry.go:31] will retry after 322.333052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 19:29:20.600859   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.199702909s)
	I1212 19:29:20.600910   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.09593367s)
	I1212 19:29:20.600977   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.089295322s)
	I1212 19:29:20.600995   11017 addons.go:495] Verifying addon metrics-server=true in "addons-410014"
	I1212 19:29:20.601214   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.054486284s)
	I1212 19:29:20.601241   11017 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-410014"
	I1212 19:29:20.603316   11017 out.go:179] * Verifying csi-hostpath-driver addon...
	I1212 19:29:20.605502   11017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 19:29:20.608976   11017 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 19:29:20.608995   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:20.741028   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:20.741205   11017 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 19:29:20.741222   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:20.923525   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:29:21.108145   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:21.241246   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:21.241513   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:21.242988   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:21.608408   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:21.741031   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:21.741154   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:22.107924   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:22.240838   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:22.241043   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:22.608608   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:22.740513   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:22.740551   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:23.107689   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:23.241387   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:23.241588   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:23.372569   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.448996301s)
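	The earlier "no matches for kind \"VolumeSnapshotClass\" ... ensure CRDs are installed first" failure happens because the VolumeSnapshotClass object is applied in the same batch that creates its CRD, before the API server has registered the new type; the forced re-apply above succeeds once the CRDs exist. A rough manual equivalent, assuming the same manifest paths on the node and a kubeconfig for the cluster, would be:
	
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	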
	I1212 19:29:23.607880   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:23.741500   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:23.741784   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:23.743059   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:24.108540   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:24.240729   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:24.240772   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:24.608895   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:24.741190   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:24.741339   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:25.108794   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:25.240897   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:25.241049   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:25.608600   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:25.740674   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:25.740782   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:25.743085   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:26.108621   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:26.241213   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:26.241230   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:26.505576   11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 19:29:26.505634   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:26.522381   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:26.608328   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:26.620707   11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 19:29:26.632489   11017 addons.go:239] Setting addon gcp-auth=true in "addons-410014"
	I1212 19:29:26.632537   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:26.632859   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:26.649620   11017 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 19:29:26.649689   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:26.665255   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:26.741817   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:26.742014   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:26.756294   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:29:26.757393   11017 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1212 19:29:26.758502   11017 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 19:29:26.758513   11017 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 19:29:26.770500   11017 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 19:29:26.770517   11017 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 19:29:26.782146   11017 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:29:26.782159   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1212 19:29:26.793619   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:29:27.069749   11017 addons.go:495] Verifying addon gcp-auth=true in "addons-410014"
	I1212 19:29:27.070989   11017 out.go:179] * Verifying gcp-auth addon...
	I1212 19:29:27.072818   11017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 19:29:27.075607   11017 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 19:29:27.075623   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
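	The gcp-auth verification above waits for the webhook pod (deployed from gcp-auth-webhook.yaml) by label in the gcp-auth namespace; a quick manual spot check, assuming kubectl access, would be:
	
		kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
	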
	I1212 19:29:27.107478   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:27.240731   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:27.240804   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:27.575698   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:27.607618   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:27.740681   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:27.740863   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:27.743317   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:28.075896   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:28.107836   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:28.241009   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:28.241164   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:28.575970   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:28.607876   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:28.741353   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:28.741565   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:29.075239   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:29.108025   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:29.241170   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:29.241422   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:29.575290   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:29.608630   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:29.740953   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:29.741152   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:30.075028   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:30.107992   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:30.241291   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:30.241369   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:30.242691   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:30.575030   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:30.607901   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:30.741065   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:30.741100   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:31.074983   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:31.107841   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:31.241166   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:31.241420   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:31.575211   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:31.608406   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:31.740558   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:31.740806   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:32.075724   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:32.107376   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:32.240289   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:32.240482   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:32.242880   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:32.575388   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:32.608572   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:32.740707   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:32.740910   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:33.076096   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:33.108113   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:33.240531   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:33.240655   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:33.575393   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:33.608513   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:33.741052   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:33.741052   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:34.074994   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:34.107782   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:34.241000   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:34.241206   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:34.576173   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:34.608417   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:34.740845   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:34.740886   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:34.743429   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:35.075940   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:35.107693   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:35.241375   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:35.241621   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:35.575022   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:35.608185   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:35.741465   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:35.741632   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:36.075091   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:36.107825   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:36.240920   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:36.241080   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:36.576064   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:36.607975   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:36.741348   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:36.741517   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:37.075036   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:37.108141   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:37.241335   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:37.241498   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:37.242714   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:37.575132   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:37.608485   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:37.740864   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:37.740890   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:38.075870   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:38.107634   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:38.240807   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:38.240970   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:38.576191   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:38.608017   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:38.741256   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:38.741427   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:39.075176   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:39.107965   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:39.241245   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:39.241355   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:39.576241   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:39.608479   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:39.740728   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:39.740888   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:39.743386   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:40.075821   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:40.107730   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:40.240817   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:40.240999   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:40.575051   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:40.608047   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:40.741167   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:40.741258   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:41.074980   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:41.107724   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:41.240939   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:41.241040   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:41.575867   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:41.607873   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:41.740943   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:41.741086   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:42.075969   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:42.107763   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:42.240702   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:42.240803   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:42.243251   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:42.575753   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:42.607547   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:42.740785   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:42.740818   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:43.075823   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:43.107822   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:43.240978   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:43.241179   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:43.575079   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:43.608133   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:43.741341   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:43.741549   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:44.074962   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:44.107906   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:44.241209   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:44.241327   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:44.575881   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:44.607800   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:44.740911   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:44.741263   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:44.742569   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:45.074857   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:45.107887   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:45.240977   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:45.241251   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:45.575054   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:45.607982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:45.741039   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:45.741269   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:46.075958   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:46.107651   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:46.240826   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:46.240997   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:46.575000   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:46.608006   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:46.741112   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:46.741214   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:47.075729   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:47.107818   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:47.241034   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:47.241215   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:47.242607   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:47.575814   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:47.607574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:47.740666   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:47.740791   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:48.075594   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:48.108465   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:48.240789   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:48.240900   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:48.575905   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:48.607801   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:48.740985   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:48.741176   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:49.074672   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:49.107340   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:49.240266   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:49.240412   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:49.243005   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:49.575574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:49.608531   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:49.740749   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:49.740846   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:50.075606   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:50.108448   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:50.240528   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:50.240631   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:50.575569   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:50.608196   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:50.740225   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:50.740449   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:51.074908   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:51.107779   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:51.240844   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:51.240996   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:51.575982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:51.607877   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:51.741381   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:51.741428   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:51.742945   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:52.075445   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:52.108246   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:52.240291   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:52.240489   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:52.575130   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:52.608216   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:52.740374   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:52.740396   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:53.075445   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:53.108311   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:53.240774   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:53.240812   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:53.575693   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:53.607765   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:53.741053   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:53.741116   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:54.075068   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:54.107896   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:54.240941   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:54.241117   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:54.242689   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:54.575907   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:54.607885   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:54.740986   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:54.741130   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:55.074874   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:55.107706   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:55.240842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:55.241116   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:55.576249   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:55.608168   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:55.740399   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:55.740574   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:56.075103   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:56.107932   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:56.241447   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:56.241748   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:56.242822   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:56.575690   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:56.607477   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:56.740705   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:56.740786   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:57.075413   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:57.108474   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:57.240747   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:57.240798   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:57.575887   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:57.607716   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:57.741104   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:57.741328   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:58.075175   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:58.108114   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:58.240333   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:58.240474   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:58.242867   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:58.575493   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:58.608355   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:58.740567   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:58.740703   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:59.075379   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:59.108128   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:59.241381   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:59.241602   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:59.577823   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:59.610414   11017 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 19:29:59.610432   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:59.743591   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:59.743773   11017 node_ready.go:49] node "addons-410014" is "Ready"
	I1212 19:29:59.743798   11017 node_ready.go:38] duration metric: took 40.502862429s for node "addons-410014" to be "Ready" ...
	I1212 19:29:59.743822   11017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:29:59.743876   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:29:59.743874   11017 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 19:29:59.744004   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:59.766336   11017 api_server.go:72] duration metric: took 41.040229421s to wait for apiserver process to appear ...
	I1212 19:29:59.766365   11017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 19:29:59.766387   11017 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 19:29:59.771759   11017 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 19:29:59.772894   11017 api_server.go:141] control plane version: v1.34.2
	I1212 19:29:59.772924   11017 api_server.go:131] duration metric: took 6.550961ms to wait for apiserver health ...
	I1212 19:29:59.772936   11017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 19:29:59.846814   11017 system_pods.go:59] 20 kube-system pods found
	I1212 19:29:59.846871   11017 system_pods.go:61] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:29:59.846886   11017 system_pods.go:61] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:29:59.846904   11017 system_pods.go:61] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:29:59.846913   11017 system_pods.go:61] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:29:59.846922   11017 system_pods.go:61] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:29:59.846928   11017 system_pods.go:61] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:29:59.846933   11017 system_pods.go:61] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:29:59.846938   11017 system_pods.go:61] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:29:59.846944   11017 system_pods.go:61] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:29:59.846952   11017 system_pods.go:61] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:29:59.846957   11017 system_pods.go:61] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:29:59.846963   11017 system_pods.go:61] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:29:59.846970   11017 system_pods.go:61] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:29:59.846979   11017 system_pods.go:61] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:29:59.846988   11017 system_pods.go:61] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:29:59.847013   11017 system_pods.go:61] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:29:59.847021   11017 system_pods.go:61] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:29:59.847029   11017 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.847040   11017 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.847048   11017 system_pods.go:61] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:29:59.847057   11017 system_pods.go:74] duration metric: took 74.11299ms to wait for pod list to return data ...
	I1212 19:29:59.847068   11017 default_sa.go:34] waiting for default service account to be created ...
	I1212 19:29:59.849937   11017 default_sa.go:45] found service account: "default"
	I1212 19:29:59.850088   11017 default_sa.go:55] duration metric: took 2.901806ms for default service account to be created ...
	I1212 19:29:59.850174   11017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 19:29:59.945677   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:29:59.945709   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:29:59.945716   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:29:59.945723   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:29:59.945729   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:29:59.945734   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:29:59.945739   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:29:59.945743   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:29:59.945746   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:29:59.945750   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:29:59.945755   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:29:59.945763   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:29:59.945766   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:29:59.945771   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:29:59.945777   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:29:59.945783   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:29:59.945791   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:29:59.945796   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:29:59.945802   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.945808   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.945815   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:29:59.945831   11017 retry.go:31] will retry after 216.29259ms: missing components: kube-dns
	I1212 19:30:00.075616   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:00.108858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:00.166772   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:30:00.166810   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:00.166822   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:00.166835   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:00.166843   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:30:00.166851   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:00.166858   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:30:00.166865   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:30:00.166870   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:30:00.166875   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:30:00.166884   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:00.166889   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:30:00.166895   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:30:00.166902   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:00.166910   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:00.166920   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:00.166928   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:00.166935   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:00.166945   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.166954   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.166962   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:30:00.166979   11017 retry.go:31] will retry after 367.633293ms: missing components: kube-dns
	I1212 19:30:00.241621   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:00.241658   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:00.538726   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:30:00.538761   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:00.538772   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:00.538782   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:00.538790   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:30:00.538799   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:00.538805   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:30:00.538814   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:30:00.538821   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:30:00.538827   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:30:00.538836   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:00.538843   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:30:00.538850   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:30:00.538859   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:00.538880   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:00.538892   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:00.538902   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:00.538913   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:00.538922   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.538934   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.538945   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:30:00.538969   11017 retry.go:31] will retry after 364.206268ms: missing components: kube-dns
	I1212 19:30:00.575798   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:00.608552   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:00.741087   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:00.741250   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:00.908205   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:30:00.908237   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:00.908246   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Running
	I1212 19:30:00.908256   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:00.908264   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:30:00.908283   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:00.908292   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:30:00.908296   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:30:00.908301   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:30:00.908305   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:30:00.908312   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:00.908316   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:30:00.908320   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:30:00.908325   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:00.908338   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:00.908350   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:00.908359   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:00.908368   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:00.908376   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.908387   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.908390   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Running
	I1212 19:30:00.908398   11017 system_pods.go:126] duration metric: took 1.058216593s to wait for k8s-apps to be running ...
	I1212 19:30:00.908405   11017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 19:30:00.908448   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:30:00.921362   11017 system_svc.go:56] duration metric: took 12.947763ms WaitForService to wait for kubelet
	I1212 19:30:00.921385   11017 kubeadm.go:587] duration metric: took 42.195283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:30:00.921406   11017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 19:30:00.923714   11017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 19:30:00.923735   11017 node_conditions.go:123] node cpu capacity is 8
	I1212 19:30:00.923748   11017 node_conditions.go:105] duration metric: took 2.335881ms to run NodePressure ...
	I1212 19:30:00.923758   11017 start.go:242] waiting for startup goroutines ...
	I1212 19:30:01.076667   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:01.177655   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:01.277848   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:01.277896   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:01.576242   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:01.608789   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:01.741265   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:01.741454   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:02.076217   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:02.108325   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:02.241685   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:02.241751   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:02.576991   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:02.608515   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:02.742048   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:02.743636   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:03.076475   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:03.177808   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:03.278079   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:03.278097   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:03.576259   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:03.609049   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:03.741812   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:03.741991   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:04.075865   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:04.108558   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:04.241233   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:04.241345   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:04.575820   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:04.608994   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:04.741593   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:04.741719   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:05.076495   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:05.108706   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:05.240999   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:05.241062   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:05.575785   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:05.607943   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:05.741226   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:05.741375   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:06.075664   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:06.107653   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:06.240927   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:06.241070   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:06.575475   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:06.608522   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:06.740911   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:06.740940   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:07.075348   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:07.108741   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:07.240959   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:07.241003   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:07.575392   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:07.608506   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:07.740624   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:07.740837   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:08.076082   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:08.108257   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:08.240655   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:08.240761   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:08.575314   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:08.608340   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:08.740639   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:08.740750   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:09.075083   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:09.108327   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:09.240387   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:09.240459   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:09.575907   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:09.608108   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:09.741707   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:09.741773   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:10.075186   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:10.108044   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:10.240836   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:10.241068   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:10.575164   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:10.608448   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:10.740318   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:10.740339   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:11.075638   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:11.107640   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:11.240912   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:11.240927   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:11.575347   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:11.608399   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:11.740613   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:11.740691   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:12.075066   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:12.108016   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:12.241247   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:12.241317   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:12.575764   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:12.607877   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:12.741251   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:12.741308   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:13.075416   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:13.108699   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:13.241202   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:13.241319   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:13.575932   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:13.608815   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:13.741197   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:13.741197   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:14.075780   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:14.107849   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:14.241708   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:14.241707   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:14.574858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:14.607935   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:14.741303   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:14.741380   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:15.075442   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:15.108568   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:15.240770   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:15.240876   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:15.575212   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:15.608243   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:15.741297   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:15.741415   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:16.076054   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:16.107996   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:16.241442   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:16.241578   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:16.576007   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:16.608320   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:16.741554   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:16.741661   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:17.076154   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:17.108247   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:17.241608   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:17.241646   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:17.575916   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:17.608031   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:17.740978   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:17.741109   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:18.075668   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:18.107877   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:18.241574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:18.241627   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:18.576236   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:18.608490   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:18.740548   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:18.740613   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:19.075328   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:19.108359   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:19.240581   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:19.240924   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:19.575613   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:19.607777   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:19.740919   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:19.740971   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:20.075561   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:20.108612   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:20.240951   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:20.241047   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:20.575141   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:20.608340   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:20.740591   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:20.740672   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:21.076002   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:21.108104   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:21.241150   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:21.241228   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:21.575672   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:21.607774   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:21.741066   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:21.741102   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:22.075402   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:22.110400   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:22.240508   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:22.240529   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:22.575883   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:22.608058   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:22.741254   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:22.741345   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:23.075513   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:23.108760   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:23.241202   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:23.241234   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:23.575545   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:23.608569   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:23.740982   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:23.741053   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:24.075753   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:24.107676   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:24.241037   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:24.241269   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:24.575650   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:24.607556   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:24.740659   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:24.740765   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:25.074833   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:25.107866   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:25.241064   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:25.241100   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:25.575504   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:25.608671   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:25.740838   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:25.740872   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:26.075401   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:26.108527   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:26.240827   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:26.241037   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:26.575358   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:26.608652   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:26.740953   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:26.741087   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:27.075214   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:27.108208   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:27.241441   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:27.241517   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:27.575866   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:27.607844   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:27.741145   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:27.741151   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:28.075706   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:28.107957   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:28.241157   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:28.241383   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:28.575720   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:28.607842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:28.741155   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:28.741161   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:29.075577   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:29.108446   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:29.240619   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:29.240719   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:29.575030   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:29.608055   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:29.741294   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:29.741294   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:30.075798   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:30.107810   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:30.240864   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:30.240978   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:30.575535   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:30.608578   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:30.740622   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:30.740874   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:31.075843   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:31.107851   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:31.241223   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:31.241262   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:31.576238   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:31.608417   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:31.740493   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:31.740630   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:32.075759   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:32.108609   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:32.240634   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:32.240657   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:32.576183   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:32.608489   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:32.740589   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:32.740720   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:33.075813   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:33.107906   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:33.241243   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:33.241254   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:33.575354   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:33.608424   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:33.740475   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:33.740575   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:34.075958   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:34.107990   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:34.241034   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:34.241152   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:34.575641   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:34.607797   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:34.740920   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:34.740926   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:35.075135   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:35.108071   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:35.241232   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:35.241464   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:35.575832   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:35.608053   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:35.741171   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:35.741406   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:36.075556   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:36.107938   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:36.241552   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:36.241566   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:36.576234   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:36.608603   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:36.740858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:36.741042   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:37.074886   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:37.108755   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:37.241477   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:37.241511   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:37.576532   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:37.609652   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:37.741684   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:37.741929   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:38.075643   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:38.108124   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:38.241812   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:38.241922   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:38.575588   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:38.609080   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:38.741815   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:38.741861   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:39.075606   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:39.109327   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:39.242694   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:39.242733   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:39.575100   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:39.608452   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:39.740765   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:39.740896   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:40.076842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:40.108921   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:40.242983   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:40.243615   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:40.575439   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:40.609097   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:40.741822   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:40.741844   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:41.075908   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:41.108117   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:41.241828   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:41.241923   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:41.575182   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:41.608858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:41.740909   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:41.741147   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:42.075942   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:42.108548   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:42.241133   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:42.241219   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:42.575900   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:42.608902   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:42.741258   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:42.741364   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:43.075447   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:43.109628   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:43.241261   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:43.241429   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:43.575839   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:43.608209   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:43.742079   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:43.742153   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:44.076195   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:44.108792   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:44.241477   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:44.241620   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:44.576101   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:44.609048   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:44.742039   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:44.742137   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:45.075420   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:45.108754   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:45.241297   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:45.241474   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:45.575974   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:45.608887   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:45.742198   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:45.742947   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:46.075574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:46.204305   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:46.241543   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:46.241650   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:46.576346   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:46.610453   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:46.740766   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:46.740916   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:47.076248   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:47.108841   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:47.241487   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:47.241522   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:47.575993   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:47.607982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:47.741375   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:47.741409   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:48.076143   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:48.108991   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:48.241970   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:48.241966   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:48.575609   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:48.607761   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:48.741196   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:48.741291   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:49.076233   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:49.108639   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:49.241235   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:49.241402   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:49.576113   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:49.608666   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:49.741389   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:49.741555   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:50.076054   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:50.108758   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:50.241500   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:50.241665   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:50.575803   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:50.608499   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:50.741134   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:50.741162   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:51.075603   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:51.109237   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:51.241927   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:51.241927   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:51.575170   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:51.608321   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:51.740617   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:51.740678   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:52.075682   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:52.108250   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:52.242177   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:52.242220   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:52.577093   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:52.609930   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:52.742162   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:52.742229   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:53.075725   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:53.108464   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:53.241347   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:53.241520   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:53.576353   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:53.608546   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:53.740921   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:53.741019   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:54.075171   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:54.108142   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:54.241989   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:54.242039   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:54.576605   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:54.609205   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:54.741695   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:54.741882   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:55.075069   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:55.108005   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:55.241285   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:55.241300   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:55.575597   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:55.608461   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:55.741040   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:55.741204   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:56.075599   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:56.176982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:56.241731   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:56.241783   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:56.575392   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:56.608990   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:56.741747   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:56.741799   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.075094   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:57.108494   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:57.240642   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:57.240683   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.575441   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:57.609519   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:57.741926   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.742532   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:58.075451   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:58.109875   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:58.241784   11017 kapi.go:107] duration metric: took 1m38.003601007s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 19:30:58.241891   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:58.576875   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:58.608725   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:58.741228   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:59.075948   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:59.108384   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:59.240841   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:59.575482   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:59.608814   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:59.741606   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:00.136011   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:00.136125   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:00.241116   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:00.576094   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:00.608608   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:00.741199   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:01.075992   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:01.108425   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:01.241581   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:01.576422   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:01.608435   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:01.741576   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:02.076746   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:02.177885   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:02.241771   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:02.575476   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:02.608806   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:02.741011   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:03.076115   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:03.108698   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:03.241738   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:03.600484   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:03.632926   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:03.741804   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:04.076842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:04.108680   11017 kapi.go:107] duration metric: took 1m43.503172766s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 19:31:04.242628   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:04.575751   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:04.741518   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:05.076136   11017 kapi.go:107] duration metric: took 1m38.003317365s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 19:31:05.077412   11017 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-410014 cluster.
	I1212 19:31:05.078441   11017 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 19:31:05.079448   11017 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 19:31:05.243131   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:05.741679   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:06.241825   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:06.740602   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:07.241811   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:07.741248   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:08.268297   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:08.741457   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:09.242106   11017 kapi.go:107] duration metric: took 1m49.003890643s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 19:31:09.246377   11017 out.go:179] * Enabled addons: registry-creds, storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, yakd, default-storageclass, amd-gpu-device-plugin, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1212 19:31:09.247679   11017 addons.go:530] duration metric: took 1m50.521544274s for enable addons: enabled=[registry-creds storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin yakd default-storageclass amd-gpu-device-plugin inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1212 19:31:09.247724   11017 start.go:247] waiting for cluster config update ...
	I1212 19:31:09.247750   11017 start.go:256] writing updated cluster config ...
	I1212 19:31:09.248014   11017 ssh_runner.go:195] Run: rm -f paused
	I1212 19:31:09.251916   11017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 19:31:09.254911   11017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnk8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.258590   11017 pod_ready.go:94] pod "coredns-66bc5c9577-gnk8c" is "Ready"
	I1212 19:31:09.258608   11017 pod_ready.go:86] duration metric: took 3.673079ms for pod "coredns-66bc5c9577-gnk8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.260217   11017 pod_ready.go:83] waiting for pod "etcd-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.263582   11017 pod_ready.go:94] pod "etcd-addons-410014" is "Ready"
	I1212 19:31:09.263603   11017 pod_ready.go:86] duration metric: took 3.371109ms for pod "etcd-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.265327   11017 pod_ready.go:83] waiting for pod "kube-apiserver-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.268508   11017 pod_ready.go:94] pod "kube-apiserver-addons-410014" is "Ready"
	I1212 19:31:09.268526   11017 pod_ready.go:86] duration metric: took 3.181394ms for pod "kube-apiserver-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.269998   11017 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.654877   11017 pod_ready.go:94] pod "kube-controller-manager-addons-410014" is "Ready"
	I1212 19:31:09.654900   11017 pod_ready.go:86] duration metric: took 384.887765ms for pod "kube-controller-manager-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.855381   11017 pod_ready.go:83] waiting for pod "kube-proxy-z8p4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.255089   11017 pod_ready.go:94] pod "kube-proxy-z8p4j" is "Ready"
	I1212 19:31:10.255111   11017 pod_ready.go:86] duration metric: took 399.708398ms for pod "kube-proxy-z8p4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.455906   11017 pod_ready.go:83] waiting for pod "kube-scheduler-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.855494   11017 pod_ready.go:94] pod "kube-scheduler-addons-410014" is "Ready"
	I1212 19:31:10.855518   11017 pod_ready.go:86] duration metric: took 399.59016ms for pod "kube-scheduler-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.855529   11017 pod_ready.go:40] duration metric: took 1.603579105s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 19:31:10.899564   11017 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 19:31:10.901406   11017 out.go:179] * Done! kubectl is now configured to use "addons-410014" cluster and "default" namespace by default
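
The long run of kapi.go:96 lines above is a label-selector poll: the addon enabler repeatedly lists pods matching a selector (for example app.kubernetes.io/name=ingress-nginx), logs the observed phase, and stops once a match is up or the per-addon timeout expires, at which point kapi.go:107 records the elapsed duration; the pod_ready.go checks at the end of the log are the per-pod equivalent. Below is a minimal client-go sketch of that pattern. The function name, the 500ms interval, the hard-coded namespace, and stopping at phase Running (rather than the Ready condition) are illustrative assumptions, not minikube's actual implementation.

// Minimal sketch of a label-selector readiness poll in the spirit of the
// kapi.go wait loop logged above. Assumptions: kubeconfig at the default
// path; caller supplies namespace, selector and timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// List pods matching the label selector, the same query logged above.
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil // a matching pod is up
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodBySelector(context.Background(), cs, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 5*time.Minute); err != nil {
		panic(err)
	}
}

Run against a live cluster this prints the same kind of "waiting for pod" line once per iteration and returns as soon as a matching pod is scheduled and running.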
	
	
	==> CRI-O <==
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.135836363Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-w94dm/POD" id=3bd41fe7-da79-48f1-8ae6-956dd415dc01 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.13593803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.145863101Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-w94dm Namespace:default ID:ed0a33840e59dd05bb004b5c1c743e5b6aa374694a189795a546ca0e0a8ca827 UID:b378818a-9e5c-4954-8b60-d4111daccd62 NetNS:/var/run/netns/f1310a94-51f3-4a70-a1d5-715fc1dcc6cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000536198}] Aliases:map[]}"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.145888798Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-w94dm to CNI network \"kindnet\" (type=ptp)"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.15613093Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-w94dm Namespace:default ID:ed0a33840e59dd05bb004b5c1c743e5b6aa374694a189795a546ca0e0a8ca827 UID:b378818a-9e5c-4954-8b60-d4111daccd62 NetNS:/var/run/netns/f1310a94-51f3-4a70-a1d5-715fc1dcc6cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000536198}] Aliases:map[]}"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.156246874Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-w94dm for CNI network kindnet (type=ptp)"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.157031751Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.157824991Z" level=info msg="Ran pod sandbox ed0a33840e59dd05bb004b5c1c743e5b6aa374694a189795a546ca0e0a8ca827 with infra container: default/hello-world-app-5d498dc89-w94dm/POD" id=3bd41fe7-da79-48f1-8ae6-956dd415dc01 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.159005634Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2de7e4f4-a5c7-4287-866d-19adfd2437ab name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.159145942Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=2de7e4f4-a5c7-4287-866d-19adfd2437ab name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.159192933Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=2de7e4f4-a5c7-4287-866d-19adfd2437ab name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.159796271Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=a8700a7e-00e3-4707-a3f9-90efbeb29ef5 name=/runtime.v1.ImageService/PullImage
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.164130135Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.934949187Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=a8700a7e-00e3-4707-a3f9-90efbeb29ef5 name=/runtime.v1.ImageService/PullImage
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.935494299Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1ab40ce1-478b-45cb-bcb9-25b6c4c02891 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.936733817Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=21da7e39-da3a-41af-85f9-16984e34239a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.939864821Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-w94dm/hello-world-app" id=07da0235-ee67-4b44-adc8-5735f2565c5d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.939990348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.947025793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.947163057Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/22f2746a5f1715d94bf0bce1d79415ae3a75770920bce7c2a0d6555ee8eb539b/merged/etc/passwd: no such file or directory"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.94718566Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/22f2746a5f1715d94bf0bce1d79415ae3a75770920bce7c2a0d6555ee8eb539b/merged/etc/group: no such file or directory"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.9474151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.976741093Z" level=info msg="Created container eacdac740a602d3bb0c8cb111c24e1f1726d230ebcaf4b3111dc0a21d147484d: default/hello-world-app-5d498dc89-w94dm/hello-world-app" id=07da0235-ee67-4b44-adc8-5735f2565c5d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.977267831Z" level=info msg="Starting container: eacdac740a602d3bb0c8cb111c24e1f1726d230ebcaf4b3111dc0a21d147484d" id=9ad62b0e-1c9a-4b06-86be-4f27282b2e3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 19:33:50 addons-410014 crio[776]: time="2025-12-12T19:33:50.979017393Z" level=info msg="Started container" PID=9313 containerID=eacdac740a602d3bb0c8cb111c24e1f1726d230ebcaf4b3111dc0a21d147484d description=default/hello-world-app-5d498dc89-w94dm/hello-world-app id=9ad62b0e-1c9a-4b06-86be-4f27282b2e3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed0a33840e59dd05bb004b5c1c743e5b6aa374694a189795a546ca0e0a8ca827
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	eacdac740a602       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   ed0a33840e59d       hello-world-app-5d498dc89-w94dm             default
	81a4778d8c23e       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   737c365f29755       registry-creds-764b6fb674-j88nd             kube-system
	39f4bda0b8532       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   f0b987ac3f350       nginx                                       default
	ce6f0643a6402       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   e06d0ea517eb6       busybox                                     default
	b49b6518ed002       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   ddaa16da73f43       ingress-nginx-controller-85d4c799dd-vgkhr   ingress-nginx
	53f30551a589c       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             2 minutes ago            Exited              patch                                    2                   53bcc506bb00c       ingress-nginx-admission-patch-k6zbn         ingress-nginx
	ebb75d365f34a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   589690ecdb123       gcp-auth-78565c9fb4-pl6ld                   gcp-auth
	76571c6136b47       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	7f1863e417224       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	5cd7aec5d9bbe       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	9d3792f634584       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	e9263571afd91       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	63bb623321fdc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   3a4bf87c35c06       gadget-pd42c                                gadget
	cb6005d68d9a9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   cc2003f2279cc       registry-proxy-5lrqf                        kube-system
	24cd917601d91       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	3693e2f08cab4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   05169ea7fc7d8       snapshot-controller-7d9fbc56b8-nlxtw        kube-system
	029883bbbb102       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   2 minutes ago            Exited              create                                   0                   ade534ba05c56       ingress-nginx-admission-create-nc25l        ingress-nginx
	d28f0bff28d4a       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   1f762fd8e0b39       nvidia-device-plugin-daemonset-qvjjb        kube-system
	355df816d72de       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   419f7cb7adfe3       csi-hostpath-resizer-0                      kube-system
	5378136ec2be9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   188812b9b6de3       amd-gpu-device-plugin-t98v8                 kube-system
	448e73e6cab52       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   75699320827f4       snapshot-controller-7d9fbc56b8-ngq92        kube-system
	f6b724bf055e8       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   6a77b2e1ceac3       csi-hostpath-attacher-0                     kube-system
	24bb69257a2c0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   2dba54dad8ede       local-path-provisioner-648f6765c9-m6r4p     local-path-storage
	d5470be0baf62       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   7bc185504490f       kube-ingress-dns-minikube                   kube-system
	db4d51a0d90e2       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   2aa7ac5474885       cloud-spanner-emulator-5bdddb765-qmtwq      default
	54eea5a21a6fd       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   ea6bcb7203766       registry-6b586f9694-vrszm                   kube-system
	243ad3742fa43       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   1e922c1e1b767       yakd-dashboard-5ff678cb9-cvcw2              yakd-dashboard
	203522604a8b9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   81f30cdf86e91       metrics-server-85b7d694d7-kh47q             kube-system
	31bb87c8f5b44       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   a1104fa3784e4       coredns-66bc5c9577-gnk8c                    kube-system
	30de3e37db155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   9541d036fdd0c       storage-provisioner                         kube-system
	57cc761c4f0a4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   5a3a1aa8ca5fb       kindnet-njtv5                               kube-system
	dea3cfc0d651a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   8b181b80d08ef       kube-proxy-z8p4j                            kube-system
	d28712ec6c409       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   171b11909e0bb       kube-controller-manager-addons-410014       kube-system
	22824f98cf9f9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   1a08e4525af22       kube-apiserver-addons-410014                kube-system
	624cd53ac7dff       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   5b6889b29eeae       kube-scheduler-addons-410014                kube-system
	6fb90e1241345       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   081274d48ae7b       etcd-addons-410014                          kube-system
	
	
	==> coredns [31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1] <==
	[INFO] 10.244.0.21:58858 - 22847 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164007s
	[INFO] 10.244.0.21:44190 - 47432 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.0045179s
	[INFO] 10.244.0.21:50483 - 43991 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00568251s
	[INFO] 10.244.0.21:45107 - 50207 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005875745s
	[INFO] 10.244.0.21:42313 - 35916 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.019864734s
	[INFO] 10.244.0.21:38381 - 59733 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004325543s
	[INFO] 10.244.0.21:51516 - 9918 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006232411s
	[INFO] 10.244.0.21:49329 - 45328 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000802612s
	[INFO] 10.244.0.21:39988 - 63866 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00229231s
	[INFO] 10.244.0.25:56875 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000271309s
	[INFO] 10.244.0.25:44941 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000194193s
	[INFO] 10.244.0.31:35809 - 43273 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000209169s
	[INFO] 10.244.0.31:47811 - 25002 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000293241s
	[INFO] 10.244.0.31:40406 - 58489 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000130448s
	[INFO] 10.244.0.31:43808 - 21676 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000192832s
	[INFO] 10.244.0.31:45562 - 40253 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000112817s
	[INFO] 10.244.0.31:57503 - 16219 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000160999s
	[INFO] 10.244.0.31:60813 - 12754 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004580284s
	[INFO] 10.244.0.31:57781 - 19780 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004727143s
	[INFO] 10.244.0.31:43891 - 1400 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004770387s
	[INFO] 10.244.0.31:39996 - 4718 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006553234s
	[INFO] 10.244.0.31:34198 - 62881 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003771243s
	[INFO] 10.244.0.31:36152 - 45958 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004111473s
	[INFO] 10.244.0.31:60990 - 41221 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00142715s
	[INFO] 10.244.0.31:43210 - 57625 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001538683s
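	
	The NXDOMAIN bursts above are the expected ndots:5 search-path expansion (cluster.local, the GCE internal domains, etc.) before the bare name resolves; a quick way to confirm this from inside the cluster, assuming the busybox test pod is still present:
	
	  # show the search domains and ndots option injected into pods
	  kubectl --context addons-410014 exec busybox -- cat /etc/resolv.conf
	  # the bare query should return NOERROR, mirroring the last two log lines
	  kubectl --context addons-410014 exec busybox -- nslookup storage.googleapis.com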
	
	
	==> describe nodes <==
	Name:               addons-410014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-410014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=addons-410014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T19_29_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-410014
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-410014"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 19:29:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-410014
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 19:33:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 19:33:47 +0000   Fri, 12 Dec 2025 19:29:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 19:33:47 +0000   Fri, 12 Dec 2025 19:29:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 19:33:47 +0000   Fri, 12 Dec 2025 19:29:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 19:33:47 +0000   Fri, 12 Dec 2025 19:29:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-410014
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                98c40f18-1184-413f-ae72-974e7ca63e13
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  default                     cloud-spanner-emulator-5bdddb765-qmtwq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  default                     hello-world-app-5d498dc89-w94dm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-pd42c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  gcp-auth                    gcp-auth-78565c9fb4-pl6ld                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-vgkhr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m31s
	  kube-system                 amd-gpu-device-plugin-t98v8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-66bc5c9577-gnk8c                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m33s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 csi-hostpathplugin-h5gm6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-addons-410014                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m39s
	  kube-system                 kindnet-njtv5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m33s
	  kube-system                 kube-apiserver-addons-410014                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-controller-manager-addons-410014        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-z8p4j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-410014                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 metrics-server-85b7d694d7-kh47q              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m31s
	  kube-system                 nvidia-device-plugin-daemonset-qvjjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 registry-6b586f9694-vrszm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 registry-creds-764b6fb674-j88nd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 registry-proxy-5lrqf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 snapshot-controller-7d9fbc56b8-ngq92         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-7d9fbc56b8-nlxtw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  local-path-storage          local-path-provisioner-648f6765c9-m6r4p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-cvcw2               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m31s  kube-proxy       
	  Normal  Starting                 4m39s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s  kubelet          Node addons-410014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s  kubelet          Node addons-410014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s  kubelet          Node addons-410014 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m34s  node-controller  Node addons-410014 event: Registered Node addons-410014 in Controller
	  Normal  NodeReady                3m52s  kubelet          Node addons-410014 status is now: NodeReady
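	
	The node summary above comes straight from the API server; if the cluster is still up it can be regenerated, and the allocated-resources figures re-checked, with:
	
	  # regenerate the node report shown above
	  kubectl --context addons-410014 describe node addons-410014
	  # cross-check capacity/allocatable without the full dump
	  kubectl --context addons-410014 get node addons-410014 -o jsonpath='{.status.allocatable}'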
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1] <==
	{"level":"warn","ts":"2025-12-12T19:29:10.010468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.016630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.027348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.034177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.041798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.047785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.053988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.061447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.068137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.074230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.080603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.088142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.094189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.112440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.115559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.121504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.127265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.169193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:21.141995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:21.148474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.543152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.549779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.563085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.569282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T19:30:59.037515Z","caller":"traceutil/trace.go:172","msg":"trace[1763838078] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"118.564408ms","start":"2025-12-12T19:30:58.918936Z","end":"2025-12-12T19:30:59.037500Z","steps":["trace[1763838078] 'process raft request'  (duration: 118.474836ms)"],"step_count":1}
	
	
	==> gcp-auth [ebb75d365f34ad5affdfbfde57294ea476ed0d5ca8eca73e9c85726aff0bf6b1] <==
	2025/12/12 19:31:04 GCP Auth Webhook started!
	2025/12/12 19:31:11 Ready to marshal response ...
	2025/12/12 19:31:11 Ready to write response ...
	2025/12/12 19:31:11 Ready to marshal response ...
	2025/12/12 19:31:11 Ready to write response ...
	2025/12/12 19:31:11 Ready to marshal response ...
	2025/12/12 19:31:11 Ready to write response ...
	2025/12/12 19:31:25 Ready to marshal response ...
	2025/12/12 19:31:25 Ready to write response ...
	2025/12/12 19:31:30 Ready to marshal response ...
	2025/12/12 19:31:30 Ready to write response ...
	2025/12/12 19:31:37 Ready to marshal response ...
	2025/12/12 19:31:37 Ready to write response ...
	2025/12/12 19:31:37 Ready to marshal response ...
	2025/12/12 19:31:37 Ready to write response ...
	2025/12/12 19:31:37 Ready to marshal response ...
	2025/12/12 19:31:37 Ready to write response ...
	2025/12/12 19:31:44 Ready to marshal response ...
	2025/12/12 19:31:44 Ready to write response ...
	2025/12/12 19:31:53 Ready to marshal response ...
	2025/12/12 19:31:53 Ready to write response ...
	2025/12/12 19:33:49 Ready to marshal response ...
	2025/12/12 19:33:49 Ready to write response ...
	
	
	==> kernel <==
	 19:33:51 up 16 min,  0 user,  load average: 0.41, 0.74, 0.37
	Linux addons-410014 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca] <==
	I1212 19:31:49.236934       1 main.go:301] handling current node
	I1212 19:31:59.237385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:31:59.237422       1 main.go:301] handling current node
	I1212 19:32:09.237397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:32:09.237448       1 main.go:301] handling current node
	I1212 19:32:19.236704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:32:19.236745       1 main.go:301] handling current node
	I1212 19:32:29.241741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:32:29.241775       1 main.go:301] handling current node
	I1212 19:32:39.239585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:32:39.239625       1 main.go:301] handling current node
	I1212 19:32:49.237449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:32:49.237481       1 main.go:301] handling current node
	I1212 19:32:59.236655       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:32:59.236693       1 main.go:301] handling current node
	I1212 19:33:09.237724       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:33:09.237784       1 main.go:301] handling current node
	I1212 19:33:19.236566       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:33:19.236598       1 main.go:301] handling current node
	I1212 19:33:29.236575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:33:29.236613       1 main.go:301] handling current node
	I1212 19:33:39.237816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:33:39.237843       1 main.go:301] handling current node
	I1212 19:33:49.236703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:33:49.236730       1 main.go:301] handling current node
	
	
	==> kube-apiserver [22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2] <==
	W1212 19:29:59.409378       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	W1212 19:29:59.409412       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	E1212 19:29:59.409419       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	E1212 19:29:59.409439       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	W1212 19:29:59.427655       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	E1212 19:29:59.427693       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	W1212 19:29:59.431145       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	E1212 19:29:59.431179       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.745436       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	W1212 19:30:02.745531       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 19:30:02.745594       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1212 19:30:02.745935       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.751015       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.771552       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.812811       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	I1212 19:30:02.920764       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 19:31:19.526627       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42054: use of closed network connection
	E1212 19:31:19.666672       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42090: use of closed network connection
	I1212 19:31:25.397578       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 19:31:25.577901       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.76.36"}
	I1212 19:31:44.706386       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 19:33:49.895606       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.15.246"}
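	
	The "failing open" webhook errors above are confined to ~19:29:59, before the gcp-auth webhook pod became ready at 19:31:04 (see the gcp-auth log); one way to verify the webhook registration and its backing service afterwards, sketched under the assumption the cluster is still up:
	
	  # the gcp-auth mutating webhook and its backing service/endpoints
	  kubectl --context addons-410014 get mutatingwebhookconfigurations
	  kubectl --context addons-410014 -n gcp-auth get svc,endpoints,pods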
	
	
	==> kube-controller-manager [d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58] <==
	I1212 19:29:17.526187       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 19:29:17.526253       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 19:29:17.526298       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 19:29:17.526443       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 19:29:17.526532       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 19:29:17.526632       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 19:29:17.526773       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 19:29:17.526779       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 19:29:17.526826       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 19:29:17.526900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 19:29:17.526913       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 19:29:17.526927       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 19:29:17.527976       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 19:29:17.532171       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 19:29:17.534365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 19:29:17.544505       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 19:29:17.548685       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1212 19:29:47.538233       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 19:29:47.538386       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1212 19:29:47.538429       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1212 19:29:47.554951       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1212 19:29:47.558446       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 19:29:47.638849       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 19:29:47.659207       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 19:30:02.481537       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a] <==
	I1212 19:29:19.002429       1 server_linux.go:53] "Using iptables proxy"
	I1212 19:29:19.073424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 19:29:19.174545       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 19:29:19.174584       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 19:29:19.174673       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 19:29:19.393193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 19:29:19.393377       1 server_linux.go:132] "Using iptables Proxier"
	I1212 19:29:19.515032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 19:29:19.521601       1 server.go:527] "Version info" version="v1.34.2"
	I1212 19:29:19.521639       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 19:29:19.552738       1 config.go:200] "Starting service config controller"
	I1212 19:29:19.552761       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 19:29:19.552787       1 config.go:106] "Starting endpoint slice config controller"
	I1212 19:29:19.552792       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 19:29:19.552805       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 19:29:19.552809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 19:29:19.553134       1 config.go:309] "Starting node config controller"
	I1212 19:29:19.553160       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 19:29:19.660852       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 19:29:19.660896       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 19:29:19.660930       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 19:29:19.672665       1 shared_informer.go:356] "Caches are synced" controller="node config"
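	
	kube-proxy reports the iptables proxier above; its effective configuration normally lives in a kube-system ConfigMap on kubeadm-based clusters such as minikube (treated here as an assumption) and can be inspected with:
	
	  # dump the kube-proxy config; an empty mode defaults to iptables on Linux
	  kubectl --context addons-410014 -n kube-system get configmap kube-proxy -o yaml
	  # or count the generated service chains on the node itself
	  out/minikube-linux-amd64 -p addons-410014 ssh -- sudo iptables-save | grep -c KUBE-SVC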
	
	
	==> kube-scheduler [624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746] <==
	E1212 19:29:10.556829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 19:29:10.556943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 19:29:10.556964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 19:29:10.557082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 19:29:10.557121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 19:29:10.558646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 19:29:10.558663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 19:29:10.558773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 19:29:10.558807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 19:29:10.558890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 19:29:10.558909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 19:29:10.558999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 19:29:10.559111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 19:29:10.559131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 19:29:10.558411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 19:29:10.559357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 19:29:10.559845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 19:29:11.386879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 19:29:11.432796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 19:29:11.442608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 19:29:11.653799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 19:29:11.689757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 19:29:11.690364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 19:29:11.744503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1212 19:29:13.152621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 19:31:59 addons-410014 kubelet[1276]: I1212 19:31:59.844672    1276 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2d4a61b9-37f9-4b88-9a58-649a3ed95f05-gcp-creds\") on node \"addons-410014\" DevicePath \"\""
	Dec 12 19:31:59 addons-410014 kubelet[1276]: I1212 19:31:59.846932    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4a61b9-37f9-4b88-9a58-649a3ed95f05-kube-api-access-9s5nn" (OuterVolumeSpecName: "kube-api-access-9s5nn") pod "2d4a61b9-37f9-4b88-9a58-649a3ed95f05" (UID: "2d4a61b9-37f9-4b88-9a58-649a3ed95f05"). InnerVolumeSpecName "kube-api-access-9s5nn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 19:31:59 addons-410014 kubelet[1276]: I1212 19:31:59.848204    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^363daa0b-d791-11f0-8457-82de27f2f3cb" (OuterVolumeSpecName: "task-pv-storage") pod "2d4a61b9-37f9-4b88-9a58-649a3ed95f05" (UID: "2d4a61b9-37f9-4b88-9a58-649a3ed95f05"). InnerVolumeSpecName "pvc-22087ca9-baf5-4e86-bd94-d79ea4a4a1ee". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 12 19:31:59 addons-410014 kubelet[1276]: I1212 19:31:59.945622    1276 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22087ca9-baf5-4e86-bd94-d79ea4a4a1ee\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^363daa0b-d791-11f0-8457-82de27f2f3cb\") on node \"addons-410014\" "
	Dec 12 19:31:59 addons-410014 kubelet[1276]: I1212 19:31:59.945646    1276 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9s5nn\" (UniqueName: \"kubernetes.io/projected/2d4a61b9-37f9-4b88-9a58-649a3ed95f05-kube-api-access-9s5nn\") on node \"addons-410014\" DevicePath \"\""
	Dec 12 19:31:59 addons-410014 kubelet[1276]: I1212 19:31:59.949619    1276 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-22087ca9-baf5-4e86-bd94-d79ea4a4a1ee" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^363daa0b-d791-11f0-8457-82de27f2f3cb") on node "addons-410014"
	Dec 12 19:32:00 addons-410014 kubelet[1276]: I1212 19:32:00.046767    1276 reconciler_common.go:299] "Volume detached for volume \"pvc-22087ca9-baf5-4e86-bd94-d79ea4a4a1ee\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^363daa0b-d791-11f0-8457-82de27f2f3cb\") on node \"addons-410014\" DevicePath \"\""
	Dec 12 19:32:00 addons-410014 kubelet[1276]: I1212 19:32:00.191635    1276 scope.go:117] "RemoveContainer" containerID="49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a"
	Dec 12 19:32:00 addons-410014 kubelet[1276]: I1212 19:32:00.201129    1276 scope.go:117] "RemoveContainer" containerID="49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a"
	Dec 12 19:32:00 addons-410014 kubelet[1276]: E1212 19:32:00.201419    1276 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a\": container with ID starting with 49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a not found: ID does not exist" containerID="49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a"
	Dec 12 19:32:00 addons-410014 kubelet[1276]: I1212 19:32:00.201458    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a"} err="failed to get container status \"49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a\": rpc error: code = NotFound desc = could not find container \"49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a\": container with ID starting with 49e091005140acd0700a5580e51f24538929f61c44047feeebd49d64c4db113a not found: ID does not exist"
	Dec 12 19:32:00 addons-410014 kubelet[1276]: I1212 19:32:00.600587    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4a61b9-37f9-4b88-9a58-649a3ed95f05" path="/var/lib/kubelet/pods/2d4a61b9-37f9-4b88-9a58-649a3ed95f05/volumes"
	Dec 12 19:32:02 addons-410014 kubelet[1276]: E1212 19:32:02.423551    1276 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-j88nd" podUID="8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b"
	Dec 12 19:32:09 addons-410014 kubelet[1276]: I1212 19:32:09.598235    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t98v8" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:32:12 addons-410014 kubelet[1276]: I1212 19:32:12.590597    1276 scope.go:117] "RemoveContainer" containerID="2548f4aa8c65a26be0c99c5f848e2527d4633b6ed14ed7394c0930f15e507251"
	Dec 12 19:32:12 addons-410014 kubelet[1276]: I1212 19:32:12.598355    1276 scope.go:117] "RemoveContainer" containerID="2cf61d3d6eb32365be5ffc0a57c9890e6e04cea0466fc4f7b13751233400475a"
	Dec 12 19:32:12 addons-410014 kubelet[1276]: I1212 19:32:12.605976    1276 scope.go:117] "RemoveContainer" containerID="69c83a9d443b04d297077fe8202aac418c9b42029786c6b5061070b0d347a6e9"
	Dec 12 19:32:12 addons-410014 kubelet[1276]: I1212 19:32:12.612330    1276 scope.go:117] "RemoveContainer" containerID="ab18423c7de6e29b0753eeced61c238ec2e0f8923260c513ef53306209e1d6c1"
	Dec 12 19:32:18 addons-410014 kubelet[1276]: I1212 19:32:18.273911    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-j88nd" podStartSLOduration=178.00561652 podStartE2EDuration="2m59.273891803s" podCreationTimestamp="2025-12-12 19:29:19 +0000 UTC" firstStartedPulling="2025-12-12 19:32:16.618749659 +0000 UTC m=+184.109086863" lastFinishedPulling="2025-12-12 19:32:17.887024938 +0000 UTC m=+185.377362146" observedRunningTime="2025-12-12 19:32:18.272703588 +0000 UTC m=+185.763040816" watchObservedRunningTime="2025-12-12 19:32:18.273891803 +0000 UTC m=+185.764229028"
	Dec 12 19:32:23 addons-410014 kubelet[1276]: I1212 19:32:23.597336    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qvjjb" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:33:25 addons-410014 kubelet[1276]: I1212 19:33:25.597248    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5lrqf" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:33:31 addons-410014 kubelet[1276]: I1212 19:33:31.597394    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t98v8" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:33:35 addons-410014 kubelet[1276]: I1212 19:33:35.597813    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qvjjb" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:33:49 addons-410014 kubelet[1276]: I1212 19:33:49.953720    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xrm\" (UniqueName: \"kubernetes.io/projected/b378818a-9e5c-4954-8b60-d4111daccd62-kube-api-access-z5xrm\") pod \"hello-world-app-5d498dc89-w94dm\" (UID: \"b378818a-9e5c-4954-8b60-d4111daccd62\") " pod="default/hello-world-app-5d498dc89-w94dm"
	Dec 12 19:33:49 addons-410014 kubelet[1276]: I1212 19:33:49.953978    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b378818a-9e5c-4954-8b60-d4111daccd62-gcp-creds\") pod \"hello-world-app-5d498dc89-w94dm\" (UID: \"b378818a-9e5c-4954-8b60-d4111daccd62\") " pod="default/hello-world-app-5d498dc89-w94dm"
	
	
	==> storage-provisioner [30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3] <==
	W1212 19:33:26.572567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:28.575294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:28.578630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:30.581499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:30.585939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:32.588412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:32.592801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:34.595634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:34.599210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:36.604523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:36.607813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:38.610176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:38.615368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:40.618137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:40.621569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:42.624230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:42.627595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:44.630100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:44.634777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:46.637138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:46.640176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:48.643820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:48.647753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:50.651176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:33:50.654400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-410014 -n addons-410014
helpers_test.go:270: (dbg) Run:  kubectl --context addons-410014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-410014 describe pod ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-410014 describe pod ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn: exit status 1 (54.167202ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nc25l" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k6zbn" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-410014 describe pod ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (227.186909ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:33:52.232907   25138 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:33:52.233055   25138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:33:52.233066   25138 out.go:374] Setting ErrFile to fd 2...
	I1212 19:33:52.233070   25138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:33:52.233293   25138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:33:52.233539   25138 mustload.go:66] Loading cluster: addons-410014
	I1212 19:33:52.233838   25138 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:33:52.233856   25138 addons.go:622] checking whether the cluster is paused
	I1212 19:33:52.233932   25138 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:33:52.233944   25138 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:33:52.234299   25138 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:33:52.251733   25138 ssh_runner.go:195] Run: systemctl --version
	I1212 19:33:52.251780   25138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:33:52.267882   25138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:33:52.359375   25138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:33:52.359446   25138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:33:52.386597   25138 cri.go:89] found id: "81a4778d8c23e5f9711733cd35b95062e07aa5d20c1deccfd2ec9eb8277b89e7"
	I1212 19:33:52.386624   25138 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:33:52.386630   25138 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:33:52.386633   25138 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:33:52.386642   25138 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:33:52.386650   25138 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:33:52.386654   25138 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:33:52.386659   25138 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:33:52.386669   25138 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:33:52.386684   25138 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:33:52.386694   25138 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:33:52.386699   25138 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:33:52.386703   25138 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:33:52.386707   25138 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:33:52.386711   25138 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:33:52.386722   25138 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:33:52.386729   25138 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:33:52.386734   25138 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:33:52.386736   25138 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:33:52.386739   25138 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:33:52.386744   25138 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:33:52.386747   25138 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:33:52.386750   25138 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:33:52.386753   25138 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:33:52.386758   25138 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:33:52.386760   25138 cri.go:89] found id: ""
	I1212 19:33:52.386810   25138 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:33:52.399614   25138 out.go:203] 
	W1212 19:33:52.400714   25138 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:33:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:33:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:33:52.400730   25138 out.go:285] * 
	* 
	W1212 19:33:52.403669   25138 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:33:52.404741   25138 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable ingress --alsologtostderr -v=1: exit status 11 (228.920624ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:33:52.459901   25200 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:33:52.460044   25200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:33:52.460053   25200 out.go:374] Setting ErrFile to fd 2...
	I1212 19:33:52.460057   25200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:33:52.460290   25200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:33:52.460515   25200 mustload.go:66] Loading cluster: addons-410014
	I1212 19:33:52.460863   25200 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:33:52.460883   25200 addons.go:622] checking whether the cluster is paused
	I1212 19:33:52.460966   25200 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:33:52.460977   25200 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:33:52.461308   25200 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:33:52.479089   25200 ssh_runner.go:195] Run: systemctl --version
	I1212 19:33:52.479126   25200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:33:52.495854   25200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:33:52.587662   25200 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:33:52.587733   25200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:33:52.615433   25200 cri.go:89] found id: "81a4778d8c23e5f9711733cd35b95062e07aa5d20c1deccfd2ec9eb8277b89e7"
	I1212 19:33:52.615450   25200 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:33:52.615453   25200 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:33:52.615456   25200 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:33:52.615460   25200 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:33:52.615463   25200 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:33:52.615465   25200 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:33:52.615468   25200 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:33:52.615471   25200 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:33:52.615477   25200 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:33:52.615482   25200 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:33:52.615486   25200 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:33:52.615490   25200 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:33:52.615495   25200 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:33:52.615499   25200 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:33:52.615511   25200 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:33:52.615519   25200 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:33:52.615523   25200 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:33:52.615526   25200 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:33:52.615529   25200 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:33:52.615534   25200 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:33:52.615537   25200 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:33:52.615539   25200 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:33:52.615542   25200 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:33:52.615550   25200 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:33:52.615552   25200 cri.go:89] found id: ""
	I1212 19:33:52.615592   25200 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:33:52.628782   25200 out.go:203] 
	W1212 19:33:52.629884   25200 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:33:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:33:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:33:52.629902   25200 out.go:285] * 
	* 
	W1212 19:33:52.632799   25200 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:33:52.634001   25200 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.49s)
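Note on the failure mode above: the ingress workflow itself progresses, but every `addons disable` call exits with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check shells into the node and runs `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" even though the preceding `crictl ps` listing succeeds. A minimal shell sketch for re-running that check by hand over `minikube ssh` (the first two commands are taken from the log; the final `ls` and the crun state directory are assumptions, not something the report confirms):

	# sketch: replay the paused-state check from the log by hand
	out/minikube-linux-amd64 -p addons-410014 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds above: prints kube-system container IDs
	out/minikube-linux-amd64 -p addons-410014 ssh -- sudo runc list -f json                                                      # fails above: open /run/runc: no such file or directory
	out/minikube-linux-amd64 -p addons-410014 ssh -- ls -d /run/runc /run/crun 2>/dev/null                                       # assumption: only a crun state dir exists on this cri-o node

If that assumption holds, the disable failures would point to a runc-based pause check running on a crun/cri-o node rather than to an addon regression.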

                                                
                                    
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-pd42c" [0f3b6685-4fe8-485f-a033-faddd5149373] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003102921s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (246.799843ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:24.963829   20252 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:24.964179   20252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:24.964193   20252 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:24.964200   20252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:24.964398   20252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:24.964694   20252 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:24.965022   20252 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:24.965050   20252 addons.go:622] checking whether the cluster is paused
	I1212 19:31:24.965158   20252 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:24.965172   20252 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:24.965580   20252 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:24.985407   20252 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:24.985454   20252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:25.007238   20252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:25.102298   20252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:25.102367   20252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:25.129995   20252 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:25.130030   20252 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:25.130037   20252 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:25.130043   20252 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:25.130048   20252 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:25.130054   20252 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:25.130058   20252 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:25.130061   20252 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:25.130064   20252 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:25.130075   20252 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:25.130082   20252 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:25.130087   20252 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:25.130092   20252 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:25.130097   20252 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:25.130102   20252 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:25.130124   20252 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:25.130132   20252 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:25.130138   20252 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:25.130142   20252 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:25.130146   20252 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:25.130148   20252 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:25.130151   20252 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:25.130154   20252 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:25.130156   20252 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:25.130159   20252 cri.go:89] found id: ""
	I1212 19:31:25.130205   20252 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:25.143264   20252 out.go:203] 
	W1212 19:31:25.144445   20252 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:25.144470   20252 out.go:285] * 
	* 
	W1212 19:31:25.147320   20252 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:25.148480   20252 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
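The gadget pods report healthy within 5 seconds; as with Ingress above, the test only fails at the same `runc list` pause check during `addons disable`. A sketch for confirming which low-level runtime cri-o is configured with on this node (the config path below is the usual cri-o default and is an assumption here, not taken from the report):

	# sketch: inspect the node's cri-o runtime configuration
	out/minikube-linux-amd64 -p addons-410014 ssh -- sudo crictl info                           # runtime status and config as JSON
	out/minikube-linux-amd64 -p addons-410014 ssh -- sudo grep -Rn default_runtime /etc/crio/   # assumed cri-o config location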

                                                
                                    
TestAddons/parallel/MetricsServer (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.931333ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002203525s
addons_test.go:465: (dbg) Run:  kubectl --context addons-410014 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (245.299121ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:25.029063   20290 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:25.029211   20290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:25.029222   20290 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:25.029229   20290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:25.029461   20290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:25.029737   20290 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:25.030075   20290 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:25.030098   20290 addons.go:622] checking whether the cluster is paused
	I1212 19:31:25.030194   20290 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:25.030209   20290 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:25.030653   20290 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:25.050170   20290 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:25.050227   20290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:25.067779   20290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:25.161562   20290 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:25.161623   20290 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:25.190265   20290 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:25.190311   20290 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:25.190318   20290 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:25.190323   20290 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:25.190326   20290 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:25.190334   20290 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:25.190337   20290 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:25.190340   20290 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:25.190343   20290 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:25.190359   20290 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:25.190365   20290 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:25.190368   20290 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:25.190371   20290 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:25.190374   20290 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:25.190376   20290 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:25.190388   20290 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:25.190395   20290 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:25.190399   20290 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:25.190402   20290 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:25.190404   20290 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:25.190410   20290 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:25.190413   20290 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:25.190415   20290 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:25.190418   20290 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:25.190421   20290 cri.go:89] found id: ""
	I1212 19:31:25.190476   20290 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:25.205289   20290 out.go:203] 
	W1212 19:31:25.206436   20290 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:25.206454   20290 out.go:285] * 
	* 
	W1212 19:31:25.210805   20290 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:25.212520   20290 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

                                                
                                    
TestAddons/parallel/CSI (38.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1212 19:31:22.321376    9254 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 19:31:22.324611    9254 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 19:31:22.324632    9254 kapi.go:107] duration metric: took 3.269769ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.278766ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-410014 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-410014 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [22842a06-fd45-4400-a82f-703df630656c] Pending
helpers_test.go:353: "task-pv-pod" [22842a06-fd45-4400-a82f-703df630656c] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.002826992s
addons_test.go:574: (dbg) Run:  kubectl --context addons-410014 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-410014 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-410014 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-410014 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-410014 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-410014 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-410014 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [2d4a61b9-37f9-4b88-9a58-649a3ed95f05] Pending
helpers_test.go:353: "task-pv-pod-restore" [2d4a61b9-37f9-4b88-9a58-649a3ed95f05] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003515693s
addons_test.go:616: (dbg) Run:  kubectl --context addons-410014 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-410014 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-410014 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (236.962384ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:32:00.582364   22854 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:32:00.582656   22854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:32:00.582666   22854 out.go:374] Setting ErrFile to fd 2...
	I1212 19:32:00.582670   22854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:32:00.582936   22854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:32:00.583231   22854 mustload.go:66] Loading cluster: addons-410014
	I1212 19:32:00.583605   22854 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:32:00.583641   22854 addons.go:622] checking whether the cluster is paused
	I1212 19:32:00.583748   22854 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:32:00.583763   22854 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:32:00.584109   22854 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:32:00.602493   22854 ssh_runner.go:195] Run: systemctl --version
	I1212 19:32:00.602545   22854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:32:00.619915   22854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:32:00.712970   22854 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:32:00.713061   22854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:32:00.741372   22854 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:32:00.741398   22854 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:32:00.741402   22854 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:32:00.741406   22854 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:32:00.741409   22854 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:32:00.741413   22854 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:32:00.741416   22854 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:32:00.741419   22854 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:32:00.741421   22854 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:32:00.741440   22854 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:32:00.741443   22854 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:32:00.741446   22854 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:32:00.741450   22854 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:32:00.741453   22854 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:32:00.741456   22854 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:32:00.741470   22854 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:32:00.741477   22854 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:32:00.741482   22854 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:32:00.741485   22854 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:32:00.741488   22854 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:32:00.741490   22854 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:32:00.741493   22854 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:32:00.741499   22854 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:32:00.741501   22854 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:32:00.741505   22854 cri.go:89] found id: ""
	I1212 19:32:00.741573   22854 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:32:00.756817   22854 out.go:203] 
	W1212 19:32:00.758011   22854 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:32:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:32:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:32:00.758026   22854 out.go:285] * 
	* 
	W1212 19:32:00.760973   22854 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:32:00.762153   22854 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (230.981822ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:32:00.819257   22918 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:32:00.819440   22918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:32:00.819451   22918 out.go:374] Setting ErrFile to fd 2...
	I1212 19:32:00.819455   22918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:32:00.819613   22918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:32:00.819866   22918 mustload.go:66] Loading cluster: addons-410014
	I1212 19:32:00.820200   22918 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:32:00.820218   22918 addons.go:622] checking whether the cluster is paused
	I1212 19:32:00.820309   22918 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:32:00.820322   22918 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:32:00.820788   22918 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:32:00.837754   22918 ssh_runner.go:195] Run: systemctl --version
	I1212 19:32:00.837793   22918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:32:00.854139   22918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:32:00.946516   22918 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:32:00.946603   22918 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:32:00.973849   22918 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:32:00.973873   22918 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:32:00.973879   22918 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:32:00.973885   22918 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:32:00.973891   22918 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:32:00.973896   22918 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:32:00.973900   22918 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:32:00.973905   22918 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:32:00.973910   22918 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:32:00.973918   22918 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:32:00.973923   22918 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:32:00.973928   22918 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:32:00.973934   22918 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:32:00.973939   22918 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:32:00.973946   22918 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:32:00.973955   22918 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:32:00.973963   22918 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:32:00.973968   22918 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:32:00.973972   22918 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:32:00.973976   22918 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:32:00.973985   22918 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:32:00.973989   22918 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:32:00.973993   22918 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:32:00.973997   22918 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:32:00.974002   22918 cri.go:89] found id: ""
	I1212 19:32:00.974047   22918 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:32:00.987726   22918 out.go:203] 
	W1212 19:32:00.989090   22918 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:32:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:32:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:32:00.989113   22918 out.go:285] * 
	* 
	W1212 19:32:00.992229   22918 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:32:00.993466   22918 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (38.68s)
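Every exit-status-11 failure above follows the same pattern: before touching an addon, minikube checks whether the cluster is paused by listing the kube-system containers with crictl and then asking runc for its container list, and on this crio node the `sudo runc list -f json` step exits 1 because runc's default state directory (/run/runc in the error message) does not exist, so the command aborts with MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED before doing any addon work. A minimal Go sketch of that failing step, meant to be run on the node (for example inside `minikube ssh`) with sudo available; it only mirrors the two commands shown in the log output and is not minikube's own implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same commands as in the failure log: list kube-system containers via
		// crictl, then ask runc which containers it knows about (the failing step).
		crictlOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		fmt.Printf("crictl: err=%v, %d bytes of container IDs\n", err, len(crictlOut))

		runcOut, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// Expected on this node: "open /run/runc: no such file or directory", exit status 1.
			fmt.Printf("runc list failed: %v\n%s", err, runcOut)
			return
		}
		fmt.Printf("runc containers: %s\n", runcOut)
	}

The crictl listing succeeds (24 container IDs in the log above) while runc has no state directory at all, which suggests the crio runtime on this node is not using runc with its default root; that mismatch, not anything specific to volumesnapshots or csi-hostpath-driver, is what each of these addon commands is reporting.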

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-410014 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-410014 --alsologtostderr -v=1: exit status 11 (245.125062ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:19.955394   19386 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:19.955511   19386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:19.955520   19386 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:19.955525   19386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:19.955715   19386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:19.955954   19386 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:19.956235   19386 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:19.956254   19386 addons.go:622] checking whether the cluster is paused
	I1212 19:31:19.956358   19386 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:19.956372   19386 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:19.956754   19386 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:19.976767   19386 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:19.976827   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:20.000867   19386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:20.097439   19386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:20.097520   19386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:20.124354   19386 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:20.124370   19386 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:20.124374   19386 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:20.124378   19386 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:20.124381   19386 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:20.124384   19386 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:20.124387   19386 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:20.124390   19386 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:20.124392   19386 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:20.124397   19386 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:20.124402   19386 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:20.124416   19386 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:20.124425   19386 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:20.124430   19386 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:20.124438   19386 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:20.124453   19386 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:20.124463   19386 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:20.124468   19386 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:20.124472   19386 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:20.124477   19386 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:20.124484   19386 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:20.124490   19386 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:20.124493   19386 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:20.124501   19386 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:20.124505   19386 cri.go:89] found id: ""
	I1212 19:31:20.124554   19386 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:20.137196   19386 out.go:203] 
	W1212 19:31:20.138369   19386 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:20.138385   19386 out.go:285] * 
	* 
	W1212 19:31:20.141213   19386 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:20.142260   19386 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-410014 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-410014
helpers_test.go:244: (dbg) docker inspect addons-410014:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1",
	        "Created": "2025-12-12T19:29:01.077342227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11682,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:29:01.116785757Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/hosts",
	        "LogPath": "/var/lib/docker/containers/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1/4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1-json.log",
	        "Name": "/addons-410014",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-410014:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-410014",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4a5536dc1575ed725317c56657f52a23cd45c97986b0c586e47505c63e2b1fd1",
	                "LowerDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e50b55a8266603824a6dd9a1cf4b6d2a694442c49034d88d55fbde0ec52bf8f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-410014",
	                "Source": "/var/lib/docker/volumes/addons-410014/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-410014",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-410014",
	                "name.minikube.sigs.k8s.io": "addons-410014",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "81995d9c28c8d1f7f8986d14bf40fa0588f8033c648b03b6ed26d2c9cf70e2e0",
	            "SandboxKey": "/var/run/docker/netns/81995d9c28c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-410014": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "adb88a589ecdd26a7a3a0a28470b93010384464bc8b7cf07d4fddcf94860e84f",
	                    "EndpointID": "88b50d26f71269edfeee1b207e9038fd84bf601c1dee180480b698d623af9f8f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0a:c8:45:73:a5:91",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-410014",
	                        "4a5536dc1575"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
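The port mappings in this inspect output match the SSH connection the failing command opened earlier (sshutil: new ssh client IP:127.0.0.1 Port:32768): container port 22/tcp is published on 127.0.0.1:32768. A short Go sketch of reading that host port back out of `docker inspect` JSON, as a cross-check against the log; the struct models only the fields used here and is an illustration, not the test harness's own helper:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields needed to read the published SSH port are modeled.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// docker inspect prints a JSON array with one element per container name given.
		out, err := exec.Command("docker", "inspect", "addons-410014").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		// For the output above this prints 127.0.0.1:32768, the address the
		// harness dials for SSH into the node.
		binding := cs[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("%s:%s\n", binding.HostIp, binding.HostPort)
	}

The Go template query that appears in the failure logs, docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014, extracts the same value in one step.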
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-410014 -n addons-410014
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-410014 logs -n 25: (1.042337533s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-122070 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-122070   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-122070                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-122070   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ start   │ -o=json --download-only -p download-only-990185 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-990185   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-990185                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-990185   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ start   │ -o=json --download-only -p download-only-573235 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-573235   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-573235                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-573235   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-122070                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-122070   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-990185                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-990185   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-573235                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-573235   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ start   │ --download-only -p download-docker-465015 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-465015 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ -p download-docker-465015                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-465015 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ start   │ --download-only -p binary-mirror-608278 --alsologtostderr --binary-mirror http://127.0.0.1:36999 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-608278   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ -p binary-mirror-608278                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-608278   │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ addons  │ enable dashboard -p addons-410014                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-410014          │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-410014                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-410014          │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ start   │ -p addons-410014 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-410014          │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:31 UTC │
	│ addons  │ addons-410014 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-410014          │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ addons-410014 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-410014          │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-410014 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-410014          │ jenkins │ v1.37.0 │ 12 Dec 25 19:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:28:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:28:38.222893   11017 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:28:38.222974   11017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:38.222978   11017 out.go:374] Setting ErrFile to fd 2...
	I1212 19:28:38.222982   11017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:38.223152   11017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:28:38.223641   11017 out.go:368] Setting JSON to false
	I1212 19:28:38.224394   11017 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":665,"bootTime":1765567053,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:28:38.224439   11017 start.go:143] virtualization: kvm guest
	I1212 19:28:38.226152   11017 out.go:179] * [addons-410014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:28:38.227243   11017 notify.go:221] Checking for updates...
	I1212 19:28:38.227268   11017 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:28:38.228322   11017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:28:38.229396   11017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:28:38.230412   11017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:28:38.231423   11017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:28:38.232356   11017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:28:38.233477   11017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:28:38.254216   11017 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:28:38.254362   11017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:38.304136   11017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-12 19:28:38.295507384 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:38.304230   11017 docker.go:319] overlay module found
	I1212 19:28:38.305647   11017 out.go:179] * Using the docker driver based on user configuration
	I1212 19:28:38.306799   11017 start.go:309] selected driver: docker
	I1212 19:28:38.306810   11017 start.go:927] validating driver "docker" against <nil>
	I1212 19:28:38.306820   11017 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:28:38.307336   11017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:38.356845   11017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-12 19:28:38.347930449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:38.356975   11017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:28:38.357193   11017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:28:38.358539   11017 out.go:179] * Using Docker driver with root privileges
	I1212 19:28:38.359624   11017 cni.go:84] Creating CNI manager for ""
	I1212 19:28:38.359679   11017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 19:28:38.359689   11017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 19:28:38.359758   11017 start.go:353] cluster config:
	{Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:28:38.360853   11017 out.go:179] * Starting "addons-410014" primary control-plane node in "addons-410014" cluster
	I1212 19:28:38.361900   11017 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 19:28:38.362829   11017 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:28:38.363835   11017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:28:38.363859   11017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 19:28:38.363867   11017 cache.go:65] Caching tarball of preloaded images
	I1212 19:28:38.363869   11017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:28:38.363968   11017 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 19:28:38.363981   11017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 19:28:38.364318   11017 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/config.json ...
	I1212 19:28:38.364343   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/config.json: {Name:mk5485d62eb36051e12a4afe212d8d5f2a720327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:28:38.380972   11017 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:28:38.381084   11017 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 19:28:38.381100   11017 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 19:28:38.381104   11017 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 19:28:38.381114   11017 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 19:28:38.381121   11017 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from local cache
	I1212 19:28:50.846232   11017 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from cached tarball
	I1212 19:28:50.846266   11017 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:28:50.846322   11017 start.go:360] acquireMachinesLock for addons-410014: {Name:mka5adb08d7923b35d736bb0962856278eccf142 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:28:50.846412   11017 start.go:364] duration metric: took 69.374µs to acquireMachinesLock for "addons-410014"
	I1212 19:28:50.846445   11017 start.go:93] Provisioning new machine with config: &{Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 19:28:50.846500   11017 start.go:125] createHost starting for "" (driver="docker")
	I1212 19:28:50.848056   11017 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 19:28:50.848291   11017 start.go:159] libmachine.API.Create for "addons-410014" (driver="docker")
	I1212 19:28:50.848326   11017 client.go:173] LocalClient.Create starting
	I1212 19:28:50.848453   11017 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 19:28:51.080759   11017 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 19:28:51.149227   11017 cli_runner.go:164] Run: docker network inspect addons-410014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 19:28:51.166935   11017 cli_runner.go:211] docker network inspect addons-410014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 19:28:51.167003   11017 network_create.go:284] running [docker network inspect addons-410014] to gather additional debugging logs...
	I1212 19:28:51.167022   11017 cli_runner.go:164] Run: docker network inspect addons-410014
	W1212 19:28:51.181871   11017 cli_runner.go:211] docker network inspect addons-410014 returned with exit code 1
	I1212 19:28:51.181903   11017 network_create.go:287] error running [docker network inspect addons-410014]: docker network inspect addons-410014: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-410014 not found
	I1212 19:28:51.181920   11017 network_create.go:289] output of [docker network inspect addons-410014]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-410014 not found
	
	** /stderr **
	I1212 19:28:51.182052   11017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:28:51.197848   11017 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc4170}
	I1212 19:28:51.197883   11017 network_create.go:124] attempt to create docker network addons-410014 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 19:28:51.197945   11017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-410014 addons-410014
	I1212 19:28:51.243691   11017 network_create.go:108] docker network addons-410014 192.168.49.0/24 created
	I1212 19:28:51.243719   11017 kic.go:121] calculated static IP "192.168.49.2" for the "addons-410014" container
	I1212 19:28:51.243767   11017 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 19:28:51.257741   11017 cli_runner.go:164] Run: docker volume create addons-410014 --label name.minikube.sigs.k8s.io=addons-410014 --label created_by.minikube.sigs.k8s.io=true
	I1212 19:28:51.273433   11017 oci.go:103] Successfully created a docker volume addons-410014
	I1212 19:28:51.273491   11017 cli_runner.go:164] Run: docker run --rm --name addons-410014-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-410014 --entrypoint /usr/bin/test -v addons-410014:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 19:28:57.316601   11017 cli_runner.go:217] Completed: docker run --rm --name addons-410014-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-410014 --entrypoint /usr/bin/test -v addons-410014:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (6.043060061s)
	I1212 19:28:57.316638   11017 oci.go:107] Successfully prepared a docker volume addons-410014
	I1212 19:28:57.316700   11017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:28:57.316712   11017 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 19:28:57.316756   11017 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-410014:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 19:29:01.011891   11017 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-410014:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (3.695090167s)
	I1212 19:29:01.011924   11017 kic.go:203] duration metric: took 3.695207303s to extract preloaded images to volume ...
	W1212 19:29:01.012033   11017 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 19:29:01.012079   11017 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 19:29:01.012129   11017 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 19:29:01.062395   11017 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-410014 --name addons-410014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-410014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-410014 --network addons-410014 --ip 192.168.49.2 --volume addons-410014:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 19:29:01.342237   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Running}}
	I1212 19:29:01.360338   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:01.377559   11017 cli_runner.go:164] Run: docker exec addons-410014 stat /var/lib/dpkg/alternatives/iptables
	I1212 19:29:01.422474   11017 oci.go:144] the created container "addons-410014" has a running status.
	I1212 19:29:01.422503   11017 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa...
	I1212 19:29:01.587621   11017 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 19:29:01.613825   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:01.632734   11017 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 19:29:01.632759   11017 kic_runner.go:114] Args: [docker exec --privileged addons-410014 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 19:29:01.689906   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:01.710446   11017 machine.go:94] provisionDockerMachine start ...
	I1212 19:29:01.710515   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:01.730539   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:01.730751   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:01.730763   11017 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:29:01.859687   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-410014
	
	I1212 19:29:01.859726   11017 ubuntu.go:182] provisioning hostname "addons-410014"
	I1212 19:29:01.859800   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:01.877739   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:01.877943   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:01.877956   11017 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-410014 && echo "addons-410014" | sudo tee /etc/hostname
	I1212 19:29:02.015581   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-410014
	
	I1212 19:29:02.015667   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.033975   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:02.034202   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:02.034226   11017 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-410014' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-410014/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-410014' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:29:02.159981   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:29:02.160013   11017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 19:29:02.160032   11017 ubuntu.go:190] setting up certificates
	I1212 19:29:02.160040   11017 provision.go:84] configureAuth start
	I1212 19:29:02.160088   11017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-410014
	I1212 19:29:02.176068   11017 provision.go:143] copyHostCerts
	I1212 19:29:02.176125   11017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 19:29:02.176240   11017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 19:29:02.176328   11017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 19:29:02.176385   11017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.addons-410014 san=[127.0.0.1 192.168.49.2 addons-410014 localhost minikube]
	I1212 19:29:02.222492   11017 provision.go:177] copyRemoteCerts
	I1212 19:29:02.222533   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:29:02.222568   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.238789   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.330183   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 19:29:02.347394   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:29:02.362996   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:29:02.378425   11017 provision.go:87] duration metric: took 218.375986ms to configureAuth
	I1212 19:29:02.378445   11017 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:29:02.378587   11017 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:29:02.378681   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.395267   11017 main.go:143] libmachine: Using SSH client type: native
	I1212 19:29:02.395471   11017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1212 19:29:02.395489   11017 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 19:29:02.653198   11017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 19:29:02.653224   11017 machine.go:97] duration metric: took 942.75767ms to provisionDockerMachine
	I1212 19:29:02.653237   11017 client.go:176] duration metric: took 11.804899719s to LocalClient.Create
	I1212 19:29:02.653253   11017 start.go:167] duration metric: took 11.804965624s to libmachine.API.Create "addons-410014"
	I1212 19:29:02.653260   11017 start.go:293] postStartSetup for "addons-410014" (driver="docker")
	I1212 19:29:02.653268   11017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:29:02.653344   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:29:02.653378   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.670250   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.763629   11017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:29:02.766703   11017 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:29:02.766730   11017 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:29:02.766740   11017 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 19:29:02.766792   11017 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 19:29:02.766815   11017 start.go:296] duration metric: took 113.550076ms for postStartSetup
	I1212 19:29:02.767070   11017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-410014
	I1212 19:29:02.783623   11017 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/config.json ...
	I1212 19:29:02.783857   11017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:29:02.783901   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.799441   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.888492   11017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:29:02.892397   11017 start.go:128] duration metric: took 12.045882809s to createHost
	I1212 19:29:02.892415   11017 start.go:83] releasing machines lock for "addons-410014", held for 12.045992155s
	I1212 19:29:02.892498   11017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-410014
	I1212 19:29:02.908852   11017 ssh_runner.go:195] Run: cat /version.json
	I1212 19:29:02.908890   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.908924   11017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:29:02.908997   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:02.925550   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:02.926551   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:03.069675   11017 ssh_runner.go:195] Run: systemctl --version
	I1212 19:29:03.075345   11017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 19:29:03.106914   11017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:29:03.110969   11017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:29:03.111017   11017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:29:03.133575   11017 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 19:29:03.133595   11017 start.go:496] detecting cgroup driver to use...
	I1212 19:29:03.133622   11017 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 19:29:03.133653   11017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:29:03.147816   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:29:03.158439   11017 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:29:03.158475   11017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:29:03.173117   11017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:29:03.188357   11017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:29:03.264812   11017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:29:03.346147   11017 docker.go:234] disabling docker service ...
	I1212 19:29:03.346217   11017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:29:03.362651   11017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:29:03.373576   11017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:29:03.449061   11017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:29:03.525751   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:29:03.536975   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:29:03.549437   11017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 19:29:03.549488   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.558487   11017 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 19:29:03.558538   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.566209   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.573701   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.581134   11017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:29:03.588114   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.595709   11017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.607741   11017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:29:03.615386   11017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:29:03.621800   11017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 19:29:03.621858   11017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 19:29:03.632586   11017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:29:03.639058   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:29:03.714883   11017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 19:29:03.837028   11017 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 19:29:03.837091   11017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 19:29:03.840671   11017 start.go:564] Will wait 60s for crictl version
	I1212 19:29:03.840721   11017 ssh_runner.go:195] Run: which crictl
	I1212 19:29:03.843944   11017 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:29:03.865880   11017 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 19:29:03.865969   11017 ssh_runner.go:195] Run: crio --version
	I1212 19:29:03.890965   11017 ssh_runner.go:195] Run: crio --version
	I1212 19:29:03.917166   11017 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 19:29:03.918244   11017 cli_runner.go:164] Run: docker network inspect addons-410014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:29:03.934208   11017 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:29:03.937739   11017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:29:03.947012   11017 kubeadm.go:884] updating cluster {Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:29:03.947116   11017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:29:03.947165   11017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:29:03.975793   11017 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 19:29:03.975809   11017 crio.go:433] Images already preloaded, skipping extraction
	I1212 19:29:03.975843   11017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:29:03.997820   11017 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 19:29:03.997838   11017 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:29:03.997845   11017 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 19:29:03.997925   11017 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-410014 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:29:03.997983   11017 ssh_runner.go:195] Run: crio config
	I1212 19:29:04.040728   11017 cni.go:84] Creating CNI manager for ""
	I1212 19:29:04.040748   11017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 19:29:04.040765   11017 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:29:04.040784   11017 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-410014 NodeName:addons-410014 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:29:04.040882   11017 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-410014"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:29:04.040937   11017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 19:29:04.048214   11017 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:29:04.048255   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:29:04.055251   11017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 19:29:04.066552   11017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 19:29:04.079977   11017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1212 19:29:04.090986   11017 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:29:04.094051   11017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:29:04.102741   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:29:04.178644   11017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:29:04.200109   11017 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014 for IP: 192.168.49.2
	I1212 19:29:04.200129   11017 certs.go:195] generating shared ca certs ...
	I1212 19:29:04.200151   11017 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.200264   11017 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 19:29:04.300233   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt ...
	I1212 19:29:04.300257   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt: {Name:mk811712a324d18afa5f7a10469f88bc4b90d914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.300436   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key ...
	I1212 19:29:04.300452   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key: {Name:mk97a8f04d69b14c722e80dd1116f301709afb08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.300557   11017 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 19:29:04.339804   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt ...
	I1212 19:29:04.339822   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt: {Name:mkf7a019fbaaaa81eec129dd4b7b743eec9e9e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.339958   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key ...
	I1212 19:29:04.339970   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key: {Name:mk9371d9666838d118eac78114fa34de285870e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.340061   11017 certs.go:257] generating profile certs ...
	I1212 19:29:04.340112   11017 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.key
	I1212 19:29:04.340125   11017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt with IP's: []
	I1212 19:29:04.523303   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt ...
	I1212 19:29:04.523323   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: {Name:mkbe12ab5afb981d7a65696fbfae2b599f08d7cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.523472   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.key ...
	I1212 19:29:04.523485   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.key: {Name:mkda5c082a9613c615d115541247dd4c7901992d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.523578   11017 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d
	I1212 19:29:04.523602   11017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 19:29:04.609307   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d ...
	I1212 19:29:04.609325   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d: {Name:mk2db75e8d4509f0173300ca92c7ac1b67562c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.609460   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d ...
	I1212 19:29:04.609475   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d: {Name:mk23ffbaeeebc87c7c135375b55ae863856538f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.609569   11017 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt.4c29363d -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt
	I1212 19:29:04.609659   11017 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key.4c29363d -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key
	I1212 19:29:04.609712   11017 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key
	I1212 19:29:04.609729   11017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt with IP's: []
	I1212 19:29:04.694811   11017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt ...
	I1212 19:29:04.694827   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt: {Name:mkfe272bb22fc96b67cdbcf6423083ea3ed13521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.694967   11017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key ...
	I1212 19:29:04.694979   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key: {Name:mk5f850397c70ca2dd135637b7b928ad321718df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:04.695161   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:29:04.695194   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:29:04.695219   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:29:04.695242   11017 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 19:29:04.695762   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:29:04.712720   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:29:04.728337   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:29:04.743633   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 19:29:04.758998   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 19:29:04.774347   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 19:29:04.789614   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:29:04.804782   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 19:29:04.820034   11017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:29:04.837183   11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:29:04.848161   11017 ssh_runner.go:195] Run: openssl version
	I1212 19:29:04.853618   11017 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.860039   11017 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:29:04.868755   11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.871955   11017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.871994   11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:29:04.905601   11017 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:29:04.912003   11017 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 19:29:04.918524   11017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:29:04.921579   11017 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 19:29:04.921628   11017 kubeadm.go:401] StartCluster: {Name:addons-410014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-410014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:04.921705   11017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:29:04.921767   11017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:29:04.945729   11017 cri.go:89] found id: ""
	I1212 19:29:04.945781   11017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:29:04.952598   11017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:29:04.959503   11017 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:29:04.959535   11017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:29:04.966550   11017 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:29:04.966565   11017 kubeadm.go:158] found existing configuration files:
	
	I1212 19:29:04.966594   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 19:29:04.973250   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:29:04.973307   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:29:04.979771   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 19:29:04.986319   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:29:04.986350   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:29:04.992602   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 19:29:04.999190   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:29:04.999226   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:29:05.005582   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 19:29:05.012408   11017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:29:05.012453   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:29:05.018913   11017 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:29:05.053204   11017 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 19:29:05.053326   11017 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:29:05.070775   11017 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:29:05.070837   11017 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 19:29:05.070878   11017 kubeadm.go:319] OS: Linux
	I1212 19:29:05.070925   11017 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:29:05.070983   11017 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:29:05.071080   11017 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:29:05.071168   11017 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:29:05.071247   11017 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:29:05.071328   11017 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:29:05.071407   11017 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:29:05.071472   11017 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 19:29:05.121614   11017 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:29:05.121786   11017 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:29:05.121945   11017 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:29:05.128736   11017 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:29:05.130615   11017 out.go:252]   - Generating certificates and keys ...
	I1212 19:29:05.130708   11017 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:29:05.130806   11017 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:29:05.476018   11017 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 19:29:05.637824   11017 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 19:29:05.879930   11017 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 19:29:06.055138   11017 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 19:29:06.192776   11017 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 19:29:06.192916   11017 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-410014 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:29:06.458749   11017 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 19:29:06.458933   11017 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-410014 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:29:06.593624   11017 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 19:29:06.723088   11017 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 19:29:06.786106   11017 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 19:29:06.786209   11017 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:29:06.829336   11017 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:29:06.968266   11017 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:29:07.216802   11017 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:29:07.520766   11017 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:29:07.842883   11017 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:29:07.843361   11017 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:29:07.847080   11017 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:29:07.850392   11017 out.go:252]   - Booting up control plane ...
	I1212 19:29:07.850489   11017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:29:07.850583   11017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:29:07.850678   11017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:29:07.862648   11017 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:29:07.862790   11017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:29:07.870561   11017 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:29:07.870827   11017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:29:07.870898   11017 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:29:07.964916   11017 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:29:07.965099   11017 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 19:29:08.466368   11017 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.601603ms
	I1212 19:29:08.469003   11017 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 19:29:08.469113   11017 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1212 19:29:08.469251   11017 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 19:29:08.469383   11017 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 19:29:09.481677   11017 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.012542756s
	I1212 19:29:10.559534   11017 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.090371298s
	I1212 19:29:11.970324   11017 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501193265s
	I1212 19:29:11.984046   11017 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 19:29:11.992393   11017 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 19:29:12.000599   11017 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 19:29:12.000880   11017 kubeadm.go:319] [mark-control-plane] Marking the node addons-410014 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 19:29:12.008087   11017 kubeadm.go:319] [bootstrap-token] Using token: b6z8qq.wplclg88br34tsuo
	I1212 19:29:12.009442   11017 out.go:252]   - Configuring RBAC rules ...
	I1212 19:29:12.009586   11017 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 19:29:12.015302   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 19:29:12.019739   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 19:29:12.021925   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 19:29:12.024024   11017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 19:29:12.027063   11017 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 19:29:12.375859   11017 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 19:29:12.798671   11017 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 19:29:13.375753   11017 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 19:29:13.376423   11017 kubeadm.go:319] 
	I1212 19:29:13.376514   11017 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 19:29:13.376525   11017 kubeadm.go:319] 
	I1212 19:29:13.376643   11017 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 19:29:13.376656   11017 kubeadm.go:319] 
	I1212 19:29:13.376676   11017 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 19:29:13.376767   11017 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 19:29:13.376815   11017 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 19:29:13.376824   11017 kubeadm.go:319] 
	I1212 19:29:13.376905   11017 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 19:29:13.376917   11017 kubeadm.go:319] 
	I1212 19:29:13.376973   11017 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 19:29:13.376979   11017 kubeadm.go:319] 
	I1212 19:29:13.377022   11017 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 19:29:13.377142   11017 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 19:29:13.377249   11017 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 19:29:13.377264   11017 kubeadm.go:319] 
	I1212 19:29:13.377399   11017 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 19:29:13.377503   11017 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 19:29:13.377518   11017 kubeadm.go:319] 
	I1212 19:29:13.377583   11017 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token b6z8qq.wplclg88br34tsuo \
	I1212 19:29:13.377720   11017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 19:29:13.377749   11017 kubeadm.go:319] 	--control-plane 
	I1212 19:29:13.377759   11017 kubeadm.go:319] 
	I1212 19:29:13.377885   11017 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 19:29:13.377901   11017 kubeadm.go:319] 
	I1212 19:29:13.377989   11017 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token b6z8qq.wplclg88br34tsuo \
	I1212 19:29:13.378116   11017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 19:29:13.379784   11017 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 19:29:13.379893   11017 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
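The control-plane health checks kubeadm ran during the init above poll three endpoints on the node. A minimal sketch for probing the same endpoints by hand, assuming the node IP (192.168.49.2) and default ports from this run, and the default RBAC that allows unauthenticated access to the probe paths (self-signed serving certs, hence -k):

	# kube-apiserver liveness endpoint
	curl -k https://192.168.49.2:8443/livez
	# kube-controller-manager and kube-scheduler health endpoints on the node itself
	curl -k https://127.0.0.1:10257/healthz
	curl -k https://127.0.0.1:10259/livez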
	I1212 19:29:13.379918   11017 cni.go:84] Creating CNI manager for ""
	I1212 19:29:13.379925   11017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 19:29:13.381238   11017 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 19:29:13.382239   11017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 19:29:13.386134   11017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 19:29:13.386151   11017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 19:29:13.398586   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 19:29:13.583248   11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 19:29:13.583373   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:13.583373   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-410014 minikube.k8s.io/updated_at=2025_12_12T19_29_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=addons-410014 minikube.k8s.io/primary=true
	I1212 19:29:13.592892   11017 ops.go:34] apiserver oom_adj: -16
	I1212 19:29:13.659854   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:14.160783   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:14.660045   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:15.160201   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:15.660628   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:16.160088   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:16.660702   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:17.159953   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:17.660406   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:18.160829   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:18.660729   11017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:29:18.725227   11017 kubeadm.go:1114] duration metric: took 5.141905239s to wait for elevateKubeSystemPrivileges
	I1212 19:29:18.725282   11017 kubeadm.go:403] duration metric: took 13.803646642s to StartCluster
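The repeated `kubectl get sa default` invocations above are a readiness poll: the command is retried roughly every 500ms until the `default` ServiceAccount exists in the cluster. A minimal equivalent of that wait loop in shell, assuming the same binary and kubeconfig paths used in this run:

	# poll until the ServiceAccount controller has created the "default" ServiceAccount
	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done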
	I1212 19:29:18.725306   11017 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:18.725411   11017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:29:18.725853   11017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:29:18.726053   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 19:29:18.726077   11017 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 19:29:18.726132   11017 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1212 19:29:18.726256   11017 addons.go:70] Setting yakd=true in profile "addons-410014"
	I1212 19:29:18.726303   11017 addons.go:239] Setting addon yakd=true in "addons-410014"
	I1212 19:29:18.726313   11017 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:29:18.726333   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726337   11017 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-410014"
	I1212 19:29:18.726316   11017 addons.go:70] Setting inspektor-gadget=true in profile "addons-410014"
	I1212 19:29:18.726361   11017 addons.go:70] Setting cloud-spanner=true in profile "addons-410014"
	I1212 19:29:18.726367   11017 addons.go:239] Setting addon inspektor-gadget=true in "addons-410014"
	I1212 19:29:18.726371   11017 addons.go:239] Setting addon cloud-spanner=true in "addons-410014"
	I1212 19:29:18.726393   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726396   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726625   11017 addons.go:70] Setting registry-creds=true in profile "addons-410014"
	I1212 19:29:18.726652   11017 addons.go:239] Setting addon registry-creds=true in "addons-410014"
	I1212 19:29:18.726681   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.726875   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.726891   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.726915   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.726987   11017 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-410014"
	I1212 19:29:18.727012   11017 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-410014"
	I1212 19:29:18.727099   11017 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-410014"
	I1212 19:29:18.727120   11017 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-410014"
	I1212 19:29:18.727122   11017 addons.go:70] Setting storage-provisioner=true in profile "addons-410014"
	I1212 19:29:18.727144   11017 addons.go:239] Setting addon storage-provisioner=true in "addons-410014"
	I1212 19:29:18.727164   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727167   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727171   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727295   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727342   11017 addons.go:70] Setting registry=true in profile "addons-410014"
	I1212 19:29:18.727362   11017 addons.go:239] Setting addon registry=true in "addons-410014"
	I1212 19:29:18.727385   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727621   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727665   11017 addons.go:70] Setting volumesnapshots=true in profile "addons-410014"
	I1212 19:29:18.727692   11017 addons.go:239] Setting addon volumesnapshots=true in "addons-410014"
	I1212 19:29:18.727720   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.727794   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.728170   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.729206   11017 addons.go:70] Setting volcano=true in profile "addons-410014"
	I1212 19:29:18.729227   11017 addons.go:239] Setting addon volcano=true in "addons-410014"
	I1212 19:29:18.729263   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.729762   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.731010   11017 addons.go:70] Setting gcp-auth=true in profile "addons-410014"
	I1212 19:29:18.731037   11017 mustload.go:66] Loading cluster: addons-410014
	I1212 19:29:18.731220   11017 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:29:18.731498   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727640   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.732710   11017 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-410014"
	I1212 19:29:18.732834   11017 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-410014"
	I1212 19:29:18.732871   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.733334   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.727648   11017 addons.go:70] Setting metrics-server=true in profile "addons-410014"
	I1212 19:29:18.734650   11017 addons.go:239] Setting addon metrics-server=true in "addons-410014"
	I1212 19:29:18.734680   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.734694   11017 out.go:179] * Verifying Kubernetes components...
	I1212 19:29:18.734913   11017 addons.go:70] Setting default-storageclass=true in profile "addons-410014"
	I1212 19:29:18.734934   11017 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-410014"
	I1212 19:29:18.736099   11017 addons.go:70] Setting ingress=true in profile "addons-410014"
	I1212 19:29:18.736124   11017 addons.go:239] Setting addon ingress=true in "addons-410014"
	I1212 19:29:18.736171   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.736254   11017 addons.go:70] Setting ingress-dns=true in profile "addons-410014"
	I1212 19:29:18.736325   11017 addons.go:239] Setting addon ingress-dns=true in "addons-410014"
	I1212 19:29:18.726356   11017 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-410014"
	I1212 19:29:18.736576   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.736615   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.737043   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.737069   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.738348   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.741495   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.741935   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.741989   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:29:18.785063   11017 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1212 19:29:18.786755   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1212 19:29:18.786776   11017 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1212 19:29:18.786921   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.802684   11017 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1212 19:29:18.806664   11017 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1212 19:29:18.807962   11017 out.go:179]   - Using image docker.io/registry:3.0.0
	I1212 19:29:18.808008   11017 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1212 19:29:18.809564   11017 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1212 19:29:18.811066   11017 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-410014"
	I1212 19:29:18.813532   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.814057   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.811359   11017 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1212 19:29:18.814316   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 19:29:18.814371   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.813338   11017 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 19:29:18.814552   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1212 19:29:18.814612   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.815086   11017 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:29:18.815099   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1212 19:29:18.815145   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.815205   11017 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1212 19:29:18.816163   11017 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 19:29:18.816543   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1212 19:29:18.816595   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.827641   11017 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:29:18.827662   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 19:29:18.827721   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.836044   11017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:29:18.837296   11017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:29:18.837318   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:29:18.837393   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.849498   11017 addons.go:239] Setting addon default-storageclass=true in "addons-410014"
	I1212 19:29:18.849557   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.850050   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:18.851797   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 19:29:18.852112   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1212 19:29:18.852506   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 19:29:18.853234   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 19:29:18.853252   11017 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 19:29:18.853318   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.855058   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:29:18.855147   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 19:29:18.856513   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:29:18.857676   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 19:29:18.858190   11017 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:29:18.858239   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1212 19:29:18.858334   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.859802   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 19:29:18.862388   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 19:29:18.863656   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 19:29:18.864749   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 19:29:18.865883   11017 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 19:29:18.867237   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 19:29:18.867308   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 19:29:18.867402   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	W1212 19:29:18.869719   11017 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1212 19:29:18.873526   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.875122   11017 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1212 19:29:18.876195   11017 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1212 19:29:18.876212   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1212 19:29:18.876258   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.886996   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.890034   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.899360   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:18.902410   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 19:29:18.902990   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.904050   11017 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1212 19:29:18.905386   11017 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 19:29:18.905406   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1212 19:29:18.905456   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.905588   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.910480   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.910695   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.912154   11017 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 19:29:18.913583   11017 out.go:179]   - Using image docker.io/busybox:stable
	I1212 19:29:18.914053   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.914890   11017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:29:18.914907   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 19:29:18.914977   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.918120   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.918663   11017 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1212 19:29:18.922025   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 19:29:18.922188   11017 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 19:29:18.922392   11017 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:29:18.922675   11017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:29:18.922735   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.922830   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:18.936396   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.944846   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	W1212 19:29:18.946413   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.946464   11017 retry.go:31] will retry after 249.941693ms: ssh: handshake failed: EOF
	I1212 19:29:18.959063   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.964504   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	W1212 19:29:18.964546   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.965585   11017 retry.go:31] will retry after 204.116261ms: ssh: handshake failed: EOF
	W1212 19:29:18.967014   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.969821   11017 retry.go:31] will retry after 165.388419ms: ssh: handshake failed: EOF
	I1212 19:29:18.972329   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	W1212 19:29:18.973777   11017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 19:29:18.974934   11017 retry.go:31] will retry after 340.686317ms: ssh: handshake failed: EOF
	I1212 19:29:18.980644   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:18.987595   11017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:29:19.054675   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 19:29:19.069475   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:29:19.079643   11017 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 19:29:19.079665   11017 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 19:29:19.079804   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:29:19.083608   11017 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 19:29:19.083626   11017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 19:29:19.084846   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 19:29:19.084934   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 19:29:19.085856   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 19:29:19.095216   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1212 19:29:19.095244   11017 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1212 19:29:19.095595   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:29:19.105356   11017 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:29:19.105379   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 19:29:19.110748   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:29:19.118250   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 19:29:19.118269   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 19:29:19.118747   11017 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 19:29:19.118769   11017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 19:29:19.122618   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:29:19.129505   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1212 19:29:19.129522   11017 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1212 19:29:19.139775   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:29:19.153070   11017 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 19:29:19.153092   11017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 19:29:19.160043   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 19:29:19.160065   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 19:29:19.176837   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1212 19:29:19.176864   11017 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1212 19:29:19.194895   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 19:29:19.194925   11017 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 19:29:19.223989   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 19:29:19.224017   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 19:29:19.231358   11017 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1212 19:29:19.231382   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1212 19:29:19.238980   11017 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
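The long `sed | kubectl replace` pipeline above rewrites the coredns ConfigMap in place; its effect is to add a `log` directive and insert a hosts block into the Corefile so that host.minikube.internal resolves to the gateway address reported here. The inserted Corefile fragment, reconstructed from that sed expression:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}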
	I1212 19:29:19.240905   11017 node_ready.go:35] waiting up to 6m0s for node "addons-410014" to be "Ready" ...
	I1212 19:29:19.259668   11017 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:29:19.259695   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 19:29:19.261906   11017 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 19:29:19.261927   11017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 19:29:19.290954   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1212 19:29:19.309440   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 19:29:19.309473   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 19:29:19.320815   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:29:19.336135   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 19:29:19.336163   11017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 19:29:19.371479   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 19:29:19.371502   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 19:29:19.373845   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 19:29:19.373867   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 19:29:19.375620   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 19:29:19.401121   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1212 19:29:19.422416   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 19:29:19.422445   11017 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 19:29:19.448397   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 19:29:19.448439   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 19:29:19.475810   11017 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:29:19.475848   11017 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 19:29:19.497128   11017 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:29:19.497159   11017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 19:29:19.504954   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:29:19.511654   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:29:19.546680   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:29:19.745526   11017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-410014" context rescaled to 1 replicas
	I1212 19:29:20.234315   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.123522534s)
	I1212 19:29:20.234358   11017 addons.go:495] Verifying addon ingress=true in "addons-410014"
	I1212 19:29:20.234362   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.094554054s)
	I1212 19:29:20.234388   11017 addons.go:495] Verifying addon registry=true in "addons-410014"
	I1212 19:29:20.234315   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.111663116s)
	I1212 19:29:20.236018   11017 out.go:179] * Verifying ingress addon...
	I1212 19:29:20.236018   11017 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-410014 service yakd-dashboard -n yakd-dashboard
	
	I1212 19:29:20.236120   11017 out.go:179] * Verifying registry addon...
	I1212 19:29:20.238179   11017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 19:29:20.238212   11017 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 19:29:20.255256   11017 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 19:29:20.255564   11017 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 19:29:20.255585   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:20.600702   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.27983014s)
	W1212 19:29:20.600748   11017 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 19:29:20.600764   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.225112921s)
	I1212 19:29:20.600782   11017 retry.go:31] will retry after 322.333052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 19:29:20.600859   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.199702909s)
	I1212 19:29:20.600910   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.09593367s)
	I1212 19:29:20.600977   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.089295322s)
	I1212 19:29:20.600995   11017 addons.go:495] Verifying addon metrics-server=true in "addons-410014"
	I1212 19:29:20.601214   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.054486284s)
	I1212 19:29:20.601241   11017 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-410014"
	I1212 19:29:20.603316   11017 out.go:179] * Verifying csi-hostpath-driver addon...
	I1212 19:29:20.605502   11017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 19:29:20.608976   11017 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 19:29:20.608995   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:20.741028   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:20.741205   11017 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 19:29:20.741222   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:20.923525   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:29:21.108145   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:21.241246   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:21.241513   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:21.242988   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:21.608408   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:21.741031   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:21.741154   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:22.107924   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:22.240838   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:22.241043   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:22.608608   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:22.740513   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:22.740551   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:23.107689   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:23.241387   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:23.241588   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:23.372569   11017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.448996301s)
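The two failed applies above are the usual CRD-establishment race: the VolumeSnapshotClass object is submitted in the same kubectl apply as the snapshot.storage.k8s.io CRDs that define it, so the REST mapping for the new kind does not exist yet and kubectl reports "no matches for kind ... ensure CRDs are installed first". minikube recovers by retrying; the forced re-apply about 2.4s later succeeds once the CRDs have registered. Below is a minimal sketch of the same idea expressed as an explicit wait-then-apply rather than a timed retry, assuming kubectl is on PATH; the kubeconfig path, CRD name, and manifest path are taken from the log, and the helper itself is illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// applyWhenEstablished waits for the named CRDs to reach the Established
// condition, then applies the manifests that depend on them. This mirrors
// the retry seen in the log above, but waits explicitly instead of backing
// off on a timer. Names and paths are examples only.
func applyWhenEstablished(kubeconfig string, crds, manifests []string) error {
	for _, crd := range crds {
		wait := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"wait", "--for=condition=established", "--timeout=60s", "crd/"+crd)
		if out, err := wait.CombinedOutput(); err != nil {
			return fmt.Errorf("waiting for CRD %s: %v: %s", crd, err, out)
		}
	}
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Example values taken from the log above; adjust for your cluster.
	err := applyWhenEstablished(
		"/var/lib/minikube/kubeconfig",
		[]string{"volumesnapshotclasses.snapshot.storage.k8s.io"},
		[]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
	)
	if err != nil {
		fmt.Println(err)
	}
}

Waiting on the Established condition avoids guessing a back-off interval; the timed retry in the log achieves the same result with less plumbing.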
	I1212 19:29:23.607880   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:23.741500   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:23.741784   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:23.743059   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:24.108540   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:24.240729   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:24.240772   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:24.608895   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:24.741190   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:24.741339   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:25.108794   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:25.240897   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:25.241049   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:25.608600   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:25.740674   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:25.740782   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:25.743085   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:26.108621   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:26.241213   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:26.241230   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:26.505576   11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 19:29:26.505634   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:26.522381   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:26.608328   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:26.620707   11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 19:29:26.632489   11017 addons.go:239] Setting addon gcp-auth=true in "addons-410014"
	I1212 19:29:26.632537   11017 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:29:26.632859   11017 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:29:26.649620   11017 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 19:29:26.649689   11017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:29:26.665255   11017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:29:26.741817   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:26.742014   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:26.756294   11017 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:29:26.757393   11017 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1212 19:29:26.758502   11017 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 19:29:26.758513   11017 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 19:29:26.770500   11017 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 19:29:26.770517   11017 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 19:29:26.782146   11017 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:29:26.782159   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1212 19:29:26.793619   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:29:27.069749   11017 addons.go:495] Verifying addon gcp-auth=true in "addons-410014"
	I1212 19:29:27.070989   11017 out.go:179] * Verifying gcp-auth addon...
	I1212 19:29:27.072818   11017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 19:29:27.075607   11017 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 19:29:27.075623   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:27.107478   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:27.240731   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:27.240804   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:27.575698   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:27.607618   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:27.740681   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:27.740863   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:27.743317   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:28.075896   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:28.107836   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:28.241009   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:28.241164   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:28.575970   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:28.607876   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:28.741353   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:28.741565   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:29.075239   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:29.108025   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:29.241170   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:29.241422   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:29.575290   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:29.608630   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:29.740953   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:29.741152   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:30.075028   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:30.107992   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:30.241291   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:30.241369   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:30.242691   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:30.575030   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:30.607901   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:30.741065   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:30.741100   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:31.074983   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:31.107841   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:31.241166   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:31.241420   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:31.575211   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:31.608406   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:31.740558   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:31.740806   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:32.075724   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:32.107376   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:32.240289   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:32.240482   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:32.242880   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:32.575388   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:32.608572   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:32.740707   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:32.740910   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:33.076096   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:33.108113   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:33.240531   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:33.240655   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:33.575393   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:33.608513   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:33.741052   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:33.741052   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:34.074994   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:34.107782   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:34.241000   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:34.241206   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:34.576173   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:34.608417   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:34.740845   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:34.740886   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:34.743429   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:35.075940   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:35.107693   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:35.241375   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:35.241621   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:35.575022   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:35.608185   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:35.741465   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:35.741632   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:36.075091   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:36.107825   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:36.240920   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:36.241080   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:36.576064   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:36.607975   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:36.741348   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:36.741517   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:37.075036   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:37.108141   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:37.241335   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:37.241498   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:37.242714   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:37.575132   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:37.608485   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:37.740864   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:37.740890   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:38.075870   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:38.107634   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:38.240807   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:38.240970   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:38.576191   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:38.608017   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:38.741256   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:38.741427   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:39.075176   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:39.107965   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:39.241245   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:39.241355   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:39.576241   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:39.608479   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:39.740728   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:39.740888   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:39.743386   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:40.075821   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:40.107730   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:40.240817   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:40.240999   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:40.575051   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:40.608047   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:40.741167   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:40.741258   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:41.074980   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:41.107724   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:41.240939   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:41.241040   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:41.575867   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:41.607873   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:41.740943   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:41.741086   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:42.075969   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:42.107763   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:42.240702   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:42.240803   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:42.243251   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:42.575753   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:42.607547   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:42.740785   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:42.740818   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:43.075823   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:43.107822   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:43.240978   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:43.241179   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:43.575079   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:43.608133   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:43.741341   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:43.741549   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:44.074962   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:44.107906   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:44.241209   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:44.241327   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:44.575881   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:44.607800   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:44.740911   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:44.741263   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:44.742569   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:45.074857   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:45.107887   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:45.240977   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:45.241251   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:45.575054   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:45.607982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:45.741039   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:45.741269   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:46.075958   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:46.107651   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:46.240826   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:46.240997   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:46.575000   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:46.608006   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:46.741112   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:46.741214   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:47.075729   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:47.107818   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:47.241034   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:47.241215   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:47.242607   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:47.575814   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:47.607574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:47.740666   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:47.740791   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:48.075594   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:48.108465   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:48.240789   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:48.240900   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:48.575905   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:48.607801   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:48.740985   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:48.741176   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:49.074672   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:49.107340   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:49.240266   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:49.240412   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:49.243005   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:49.575574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:49.608531   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:49.740749   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:49.740846   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:50.075606   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:50.108448   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:50.240528   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:50.240631   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:50.575569   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:50.608196   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:50.740225   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:50.740449   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:51.074908   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:51.107779   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:51.240844   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:51.240996   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:51.575982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:51.607877   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:51.741381   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:51.741428   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:51.742945   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:52.075445   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:52.108246   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:52.240291   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:52.240489   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:52.575130   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:52.608216   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:52.740374   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:52.740396   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:53.075445   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:53.108311   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:53.240774   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:53.240812   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:53.575693   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:53.607765   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:53.741053   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:53.741116   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:54.075068   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:54.107896   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:54.240941   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:54.241117   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:54.242689   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:54.575907   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:54.607885   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:54.740986   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:54.741130   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:55.074874   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:55.107706   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:55.240842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:55.241116   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:55.576249   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:55.608168   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:55.740399   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:55.740574   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:56.075103   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:56.107932   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:56.241447   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:56.241748   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:56.242822   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:56.575690   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:56.607477   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:56.740705   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:56.740786   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:57.075413   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:57.108474   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:57.240747   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:57.240798   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:57.575887   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:57.607716   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:57.741104   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:57.741328   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:58.075175   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:58.108114   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:58.240333   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:58.240474   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 19:29:58.242867   11017 node_ready.go:57] node "addons-410014" has "Ready":"False" status (will retry)
	I1212 19:29:58.575493   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:58.608355   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:58.740567   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:58.740703   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:59.075379   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:59.108128   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:59.241381   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:59.241602   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:59.577823   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:29:59.610414   11017 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 19:29:59.610432   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:29:59.743591   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:29:59.743773   11017 node_ready.go:49] node "addons-410014" is "Ready"
	I1212 19:29:59.743798   11017 node_ready.go:38] duration metric: took 40.502862429s for node "addons-410014" to be "Ready" ...
	I1212 19:29:59.743822   11017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:29:59.743876   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:29:59.743874   11017 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 19:29:59.744004   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:29:59.766336   11017 api_server.go:72] duration metric: took 41.040229421s to wait for apiserver process to appear ...
	I1212 19:29:59.766365   11017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 19:29:59.766387   11017 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 19:29:59.771759   11017 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 19:29:59.772894   11017 api_server.go:141] control plane version: v1.34.2
	I1212 19:29:59.772924   11017 api_server.go:131] duration metric: took 6.550961ms to wait for apiserver health ...
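The healthz wait above is a plain HTTPS probe of the apiserver: /healthz (like /livez and /readyz) is readable without credentials under the default RBAC bindings, so polling it until it answers 200/"ok" is sufficient. A minimal sketch under those assumptions follows; the URL is the one from the log, TLS verification is skipped because the endpoint is addressed by IP with a cluster-internal certificate, and the helper is illustrative rather than minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForHealthz polls the apiserver's unauthenticated /healthz endpoint
// until it returns "ok" or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above; adjust for your cluster.
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}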
	I1212 19:29:59.772936   11017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 19:29:59.846814   11017 system_pods.go:59] 20 kube-system pods found
	I1212 19:29:59.846871   11017 system_pods.go:61] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:29:59.846886   11017 system_pods.go:61] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:29:59.846904   11017 system_pods.go:61] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:29:59.846913   11017 system_pods.go:61] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:29:59.846922   11017 system_pods.go:61] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:29:59.846928   11017 system_pods.go:61] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:29:59.846933   11017 system_pods.go:61] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:29:59.846938   11017 system_pods.go:61] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:29:59.846944   11017 system_pods.go:61] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:29:59.846952   11017 system_pods.go:61] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:29:59.846957   11017 system_pods.go:61] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:29:59.846963   11017 system_pods.go:61] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:29:59.846970   11017 system_pods.go:61] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:29:59.846979   11017 system_pods.go:61] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:29:59.846988   11017 system_pods.go:61] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:29:59.847013   11017 system_pods.go:61] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:29:59.847021   11017 system_pods.go:61] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:29:59.847029   11017 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.847040   11017 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.847048   11017 system_pods.go:61] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:29:59.847057   11017 system_pods.go:74] duration metric: took 74.11299ms to wait for pod list to return data ...
	I1212 19:29:59.847068   11017 default_sa.go:34] waiting for default service account to be created ...
	I1212 19:29:59.849937   11017 default_sa.go:45] found service account: "default"
	I1212 19:29:59.850088   11017 default_sa.go:55] duration metric: took 2.901806ms for default service account to be created ...
	I1212 19:29:59.850174   11017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 19:29:59.945677   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:29:59.945709   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:29:59.945716   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:29:59.945723   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:29:59.945729   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:29:59.945734   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:29:59.945739   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:29:59.945743   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:29:59.945746   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:29:59.945750   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:29:59.945755   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:29:59.945763   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:29:59.945766   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:29:59.945771   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:29:59.945777   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:29:59.945783   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:29:59.945791   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:29:59.945796   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:29:59.945802   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.945808   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:29:59.945815   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:29:59.945831   11017 retry.go:31] will retry after 216.29259ms: missing components: kube-dns
	I1212 19:30:00.075616   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:00.108858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:00.166772   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:30:00.166810   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:00.166822   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:00.166835   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:00.166843   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:30:00.166851   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:00.166858   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:30:00.166865   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:30:00.166870   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:30:00.166875   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:30:00.166884   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:00.166889   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:30:00.166895   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:30:00.166902   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:00.166910   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:00.166920   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:00.166928   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:00.166935   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:00.166945   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.166954   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.166962   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:30:00.166979   11017 retry.go:31] will retry after 367.633293ms: missing components: kube-dns
	I1212 19:30:00.241621   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:00.241658   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:00.538726   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:30:00.538761   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:00.538772   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:00.538782   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:00.538790   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:30:00.538799   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:00.538805   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:30:00.538814   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:30:00.538821   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:30:00.538827   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:30:00.538836   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:00.538843   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:30:00.538850   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:30:00.538859   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:00.538880   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:00.538892   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:00.538902   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:00.538913   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:00.538922   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.538934   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.538945   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 19:30:00.538969   11017 retry.go:31] will retry after 364.206268ms: missing components: kube-dns
	I1212 19:30:00.575798   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:00.608552   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:00.741087   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:00.741250   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:00.908205   11017 system_pods.go:86] 20 kube-system pods found
	I1212 19:30:00.908237   11017 system_pods.go:89] "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:00.908246   11017 system_pods.go:89] "coredns-66bc5c9577-gnk8c" [dd588b88-e022-4f67-a5af-50af77d298f5] Running
	I1212 19:30:00.908256   11017 system_pods.go:89] "csi-hostpath-attacher-0" [7f2b28ab-1a28-4750-9528-1182ed5049c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:00.908264   11017 system_pods.go:89] "csi-hostpath-resizer-0" [8b3a9abc-856b-4824-948f-e5453e2d51c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:30:00.908283   11017 system_pods.go:89] "csi-hostpathplugin-h5gm6" [784a90a6-2593-43ba-9f22-1277078d2606] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:00.908292   11017 system_pods.go:89] "etcd-addons-410014" [2b8eb78b-6c09-471a-80b3-3b4967259475] Running
	I1212 19:30:00.908296   11017 system_pods.go:89] "kindnet-njtv5" [7736d1bc-22c7-4a24-bbc9-dac9a3b91833] Running
	I1212 19:30:00.908301   11017 system_pods.go:89] "kube-apiserver-addons-410014" [de750339-b8a2-4580-a136-db247a033560] Running
	I1212 19:30:00.908305   11017 system_pods.go:89] "kube-controller-manager-addons-410014" [29e83fe6-7e4d-418f-b08f-eb1d6940d87d] Running
	I1212 19:30:00.908312   11017 system_pods.go:89] "kube-ingress-dns-minikube" [e51ad06d-04da-4baf-af51-0454a4a0a8d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:00.908316   11017 system_pods.go:89] "kube-proxy-z8p4j" [92d8c7ec-abbf-4989-9bf7-effc9afb1c8d] Running
	I1212 19:30:00.908320   11017 system_pods.go:89] "kube-scheduler-addons-410014" [f15b0648-926c-47f2-a4b0-0c59b833bc25] Running
	I1212 19:30:00.908325   11017 system_pods.go:89] "metrics-server-85b7d694d7-kh47q" [3cdc089b-338d-4aa8-95a4-b5ede11fe1b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:00.908338   11017 system_pods.go:89] "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:00.908350   11017 system_pods.go:89] "registry-6b586f9694-vrszm" [bd0b2d8b-989d-4909-9db8-2993ac9f26f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:00.908359   11017 system_pods.go:89] "registry-creds-764b6fb674-j88nd" [8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:00.908368   11017 system_pods.go:89] "registry-proxy-5lrqf" [4f1b686d-49b1-4fe4-a2ac-d475882292bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:00.908376   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ngq92" [972ebf19-5f22-4eff-a9f9-3f7871840abc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.908387   11017 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nlxtw" [44c2c8a6-de6c-4940-84f7-51995b8ba442] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:00.908390   11017 system_pods.go:89] "storage-provisioner" [0149cd6a-d0e2-4856-bc29-8c4ee8117fb8] Running
	I1212 19:30:00.908398   11017 system_pods.go:126] duration metric: took 1.058216593s to wait for k8s-apps to be running ...
	I1212 19:30:00.908405   11017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 19:30:00.908448   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:30:00.921362   11017 system_svc.go:56] duration metric: took 12.947763ms WaitForService to wait for kubelet
	I1212 19:30:00.921385   11017 kubeadm.go:587] duration metric: took 42.195283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:30:00.921406   11017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 19:30:00.923714   11017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 19:30:00.923735   11017 node_conditions.go:123] node cpu capacity is 8
	I1212 19:30:00.923748   11017 node_conditions.go:105] duration metric: took 2.335881ms to run NodePressure ...
	I1212 19:30:00.923758   11017 start.go:242] waiting for startup goroutines ...
	I1212 19:30:01.076667   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:01.177655   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:01.277848   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:01.277896   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:01.576242   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:01.608789   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:01.741265   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:01.741454   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:02.076217   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:02.108325   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:02.241685   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:02.241751   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:02.576991   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:02.608515   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:02.742048   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:02.743636   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:03.076475   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:03.177808   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:03.278079   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:03.278097   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:03.576259   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:03.609049   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:03.741812   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:03.741991   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:04.075865   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:04.108558   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:04.241233   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:04.241345   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:04.575820   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:04.608994   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:04.741593   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:04.741719   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:05.076495   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:05.108706   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:05.240999   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:05.241062   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:05.575785   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:05.607943   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:05.741226   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:05.741375   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:06.075664   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:06.107653   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:06.240927   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:06.241070   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:06.575475   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:06.608522   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:06.740911   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:06.740940   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:07.075348   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:07.108741   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:07.240959   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:07.241003   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:07.575392   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:07.608506   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:07.740624   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:07.740837   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:08.076082   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:08.108257   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:08.240655   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:08.240761   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:08.575314   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:08.608340   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:08.740639   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:08.740750   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:09.075083   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:09.108327   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:09.240387   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:09.240459   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:09.575907   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:09.608108   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:09.741707   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:09.741773   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:10.075186   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:10.108044   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:10.240836   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:10.241068   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:10.575164   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:10.608448   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:10.740318   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:10.740339   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:11.075638   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:11.107640   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:11.240912   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:11.240927   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:11.575347   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:11.608399   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:11.740613   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:11.740691   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:12.075066   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:12.108016   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:12.241247   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:12.241317   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:12.575764   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:12.607877   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:12.741251   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:12.741308   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:13.075416   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:13.108699   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:13.241202   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:13.241319   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:13.575932   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:13.608815   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:13.741197   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:13.741197   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:14.075780   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:14.107849   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:14.241708   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:14.241707   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:14.574858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:14.607935   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:14.741303   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:14.741380   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:15.075442   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:15.108568   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:15.240770   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:15.240876   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:15.575212   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:15.608243   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:15.741297   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:15.741415   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:16.076054   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:16.107996   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:16.241442   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:16.241578   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:16.576007   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:16.608320   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:16.741554   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:16.741661   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:17.076154   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:17.108247   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:17.241608   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:17.241646   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:17.575916   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:17.608031   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:17.740978   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:17.741109   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:18.075668   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:18.107877   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:18.241574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:18.241627   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:18.576236   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:18.608490   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:18.740548   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:18.740613   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:19.075328   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:19.108359   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:19.240581   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:19.240924   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:19.575613   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:19.607777   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:19.740919   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:19.740971   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:20.075561   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:20.108612   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:20.240951   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:20.241047   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:20.575141   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:20.608340   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:20.740591   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:20.740672   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:21.076002   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:21.108104   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:21.241150   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:21.241228   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:21.575672   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:21.607774   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:21.741066   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:21.741102   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:22.075402   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:22.110400   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:22.240508   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:22.240529   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:22.575883   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:22.608058   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:22.741254   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:22.741345   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:23.075513   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:23.108760   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:23.241202   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:23.241234   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:23.575545   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:23.608569   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:23.740982   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:23.741053   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:24.075753   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:24.107676   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:24.241037   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:24.241269   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:24.575650   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:24.607556   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:24.740659   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:24.740765   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:25.074833   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:25.107866   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:25.241064   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:25.241100   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:25.575504   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:25.608671   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:25.740838   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:25.740872   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:26.075401   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:26.108527   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:26.240827   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:26.241037   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:26.575358   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:26.608652   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:26.740953   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:26.741087   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:27.075214   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:27.108208   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:27.241441   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:27.241517   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:27.575866   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:27.607844   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:27.741145   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:27.741151   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:28.075706   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:28.107957   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:28.241157   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:28.241383   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:28.575720   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:28.607842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:28.741155   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:28.741161   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:29.075577   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:29.108446   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:29.240619   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:29.240719   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:29.575030   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:29.608055   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:29.741294   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:29.741294   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:30.075798   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:30.107810   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:30.240864   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:30.240978   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:30.575535   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:30.608578   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:30.740622   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:30.740874   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:31.075843   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:31.107851   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:31.241223   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:31.241262   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:31.576238   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:31.608417   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:31.740493   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:31.740630   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:32.075759   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:32.108609   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:32.240634   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:32.240657   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:32.576183   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:32.608489   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:32.740589   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:32.740720   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:33.075813   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:33.107906   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:33.241243   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:33.241254   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:33.575354   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:33.608424   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:33.740475   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:33.740575   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:34.075958   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:34.107990   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:34.241034   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:34.241152   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:34.575641   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:34.607797   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:34.740920   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:34.740926   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:35.075135   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:35.108071   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:35.241232   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:35.241464   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:35.575832   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:35.608053   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:35.741171   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:35.741406   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:36.075556   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:36.107938   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:36.241552   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:36.241566   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:36.576234   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:36.608603   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:36.740858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:36.741042   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:37.074886   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:37.108755   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:37.241477   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:37.241511   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:37.576532   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:37.609652   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:37.741684   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:37.741929   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:38.075643   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:38.108124   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:38.241812   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:38.241922   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:38.575588   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:38.609080   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:38.741815   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:38.741861   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:39.075606   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:39.109327   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:39.242694   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:39.242733   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:39.575100   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:39.608452   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:39.740765   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:39.740896   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:40.076842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:40.108921   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:40.242983   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:40.243615   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:40.575439   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:40.609097   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:40.741822   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:40.741844   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:41.075908   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:41.108117   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:41.241828   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:41.241923   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:41.575182   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:41.608858   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:41.740909   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:41.741147   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:42.075942   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:42.108548   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:42.241133   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:42.241219   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:42.575900   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:42.608902   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:42.741258   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:42.741364   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:43.075447   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:43.109628   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:43.241261   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:43.241429   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:43.575839   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:43.608209   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:43.742079   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:43.742153   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:44.076195   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:44.108792   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:44.241477   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:44.241620   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:44.576101   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:44.609048   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:44.742039   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:44.742137   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:45.075420   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:45.108754   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:45.241297   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:45.241474   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:45.575974   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:45.608887   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:45.742198   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:45.742947   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:46.075574   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:46.204305   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:46.241543   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:46.241650   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:46.576346   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:46.610453   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:46.740766   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:46.740916   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:47.076248   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:47.108841   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:47.241487   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:47.241522   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:47.575993   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:47.607982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:47.741375   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:47.741409   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:48.076143   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:48.108991   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:48.241970   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:48.241966   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:48.575609   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:48.607761   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:48.741196   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:48.741291   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:49.076233   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:49.108639   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:49.241235   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:49.241402   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:49.576113   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:49.608666   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:49.741389   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:49.741555   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:50.076054   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:50.108758   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:50.241500   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:50.241665   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:50.575803   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:50.608499   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:50.741134   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:50.741162   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:51.075603   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:51.109237   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:51.241927   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:51.241927   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:51.575170   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:51.608321   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:51.740617   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:51.740678   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:52.075682   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:52.108250   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:52.242177   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:52.242220   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:52.577093   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:52.609930   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:52.742162   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:52.742229   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:53.075725   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:53.108464   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:53.241347   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:53.241520   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:53.576353   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:53.608546   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:53.740921   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:53.741019   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:54.075171   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:54.108142   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:54.241989   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:54.242039   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:54.576605   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:54.609205   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:54.741695   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:54.741882   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:55.075069   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:55.108005   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:55.241285   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:55.241300   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:55.575597   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:55.608461   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:55.741040   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:55.741204   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:56.075599   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:56.176982   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:56.241731   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:56.241783   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:56.575392   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:56.608990   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:56.741747   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:56.741799   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.075094   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:57.108494   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:57.240642   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:57.240683   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.575441   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:57.609519   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:57.741926   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.742532   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:58.075451   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:58.109875   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:58.241784   11017 kapi.go:107] duration metric: took 1m38.003601007s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 19:30:58.241891   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:58.576875   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:58.608725   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:58.741228   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:59.075948   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:59.108384   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:59.240841   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:59.575482   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:59.608814   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:59.741606   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:00.136011   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:00.136125   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:00.241116   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:00.576094   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:00.608608   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:00.741199   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:01.075992   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:01.108425   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:01.241581   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:01.576422   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:01.608435   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:01.741576   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:02.076746   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:02.177885   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:02.241771   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:02.575476   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:02.608806   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:02.741011   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:03.076115   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:03.108698   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:03.241738   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:03.600484   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:03.632926   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:03.741804   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:04.076842   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:04.108680   11017 kapi.go:107] duration metric: took 1m43.503172766s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 19:31:04.242628   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:04.575751   11017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:04.741518   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:05.076136   11017 kapi.go:107] duration metric: took 1m38.003317365s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 19:31:05.077412   11017 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-410014 cluster.
	I1212 19:31:05.078441   11017 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 19:31:05.079448   11017 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 19:31:05.243131   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:05.741679   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:06.241825   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:06.740602   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:07.241811   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:07.741248   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:08.268297   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:08.741457   11017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:09.242106   11017 kapi.go:107] duration metric: took 1m49.003890643s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 19:31:09.246377   11017 out.go:179] * Enabled addons: registry-creds, storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, yakd, default-storageclass, amd-gpu-device-plugin, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1212 19:31:09.247679   11017 addons.go:530] duration metric: took 1m50.521544274s for enable addons: enabled=[registry-creds storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin yakd default-storageclass amd-gpu-device-plugin inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1212 19:31:09.247724   11017 start.go:247] waiting for cluster config update ...
	I1212 19:31:09.247750   11017 start.go:256] writing updated cluster config ...
	I1212 19:31:09.248014   11017 ssh_runner.go:195] Run: rm -f paused
	I1212 19:31:09.251916   11017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 19:31:09.254911   11017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnk8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.258590   11017 pod_ready.go:94] pod "coredns-66bc5c9577-gnk8c" is "Ready"
	I1212 19:31:09.258608   11017 pod_ready.go:86] duration metric: took 3.673079ms for pod "coredns-66bc5c9577-gnk8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.260217   11017 pod_ready.go:83] waiting for pod "etcd-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.263582   11017 pod_ready.go:94] pod "etcd-addons-410014" is "Ready"
	I1212 19:31:09.263603   11017 pod_ready.go:86] duration metric: took 3.371109ms for pod "etcd-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.265327   11017 pod_ready.go:83] waiting for pod "kube-apiserver-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.268508   11017 pod_ready.go:94] pod "kube-apiserver-addons-410014" is "Ready"
	I1212 19:31:09.268526   11017 pod_ready.go:86] duration metric: took 3.181394ms for pod "kube-apiserver-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.269998   11017 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.654877   11017 pod_ready.go:94] pod "kube-controller-manager-addons-410014" is "Ready"
	I1212 19:31:09.654900   11017 pod_ready.go:86] duration metric: took 384.887765ms for pod "kube-controller-manager-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:09.855381   11017 pod_ready.go:83] waiting for pod "kube-proxy-z8p4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.255089   11017 pod_ready.go:94] pod "kube-proxy-z8p4j" is "Ready"
	I1212 19:31:10.255111   11017 pod_ready.go:86] duration metric: took 399.708398ms for pod "kube-proxy-z8p4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.455906   11017 pod_ready.go:83] waiting for pod "kube-scheduler-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.855494   11017 pod_ready.go:94] pod "kube-scheduler-addons-410014" is "Ready"
	I1212 19:31:10.855518   11017 pod_ready.go:86] duration metric: took 399.59016ms for pod "kube-scheduler-addons-410014" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:31:10.855529   11017 pod_ready.go:40] duration metric: took 1.603579105s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 19:31:10.899564   11017 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 19:31:10.901406   11017 out.go:179] * Done! kubectl is now configured to use "addons-410014" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 19:31:09 addons-410014 crio[776]: time="2025-12-12T19:31:09.976105918Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-admission-patch-k6zbn from CNI network \"kindnet\" (type=ptp)"
	Dec 12 19:31:10 addons-410014 crio[776]: time="2025-12-12T19:31:10.004340553Z" level=info msg="Stopped pod sandbox: 53bcc506bb00cf265bf963b8b1da289b178c1cfca286b6c12c06756768c21eac" id=94323621-ba05-403c-8bb9-42f4799e1c88 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.697761118Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5bd42459-e000-4feb-ac4d-93e0df72a737 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.697840972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.704406779Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e06d0ea517eb6cb45ebecf42bdec6b5546807e63b176dc7808e4fb4c435a4b69 UID:616944ae-2125-4437-bf51-6aa3067feb79 NetNS:/var/run/netns/56a13e02-046f-4f9e-8774-1d6e5335b326 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000812a28}] Aliases:map[]}"
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.704431308Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.713358693Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e06d0ea517eb6cb45ebecf42bdec6b5546807e63b176dc7808e4fb4c435a4b69 UID:616944ae-2125-4437-bf51-6aa3067feb79 NetNS:/var/run/netns/56a13e02-046f-4f9e-8774-1d6e5335b326 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000812a28}] Aliases:map[]}"
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.71348873Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.714346978Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.715558089Z" level=info msg="Ran pod sandbox e06d0ea517eb6cb45ebecf42bdec6b5546807e63b176dc7808e4fb4c435a4b69 with infra container: default/busybox/POD" id=5bd42459-e000-4feb-ac4d-93e0df72a737 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.716671605Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=972b8a82-e6f7-4fb2-a984-82871ed79398 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.716770887Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=972b8a82-e6f7-4fb2-a984-82871ed79398 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.716802744Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=972b8a82-e6f7-4fb2-a984-82871ed79398 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.717349535Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c2af9293-dd75-4223-ad7e-0d1d1fb7b8aa name=/runtime.v1.ImageService/PullImage
	Dec 12 19:31:11 addons-410014 crio[776]: time="2025-12-12T19:31:11.718810077Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.337883495Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c2af9293-dd75-4223-ad7e-0d1d1fb7b8aa name=/runtime.v1.ImageService/PullImage
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.338376979Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=abf2b363-1afa-45bf-b4ad-b047cdc1b707 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.339521772Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=22ff5c0a-502c-4f70-823d-d9fa0c96e65d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.342962705Z" level=info msg="Creating container: default/busybox/busybox" id=e61cb03a-a6af-4e83-94be-8fdf89966c3d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.343060607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.347961284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.348403471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.378404085Z" level=info msg="Created container ce6f0643a64027d439e350616d771809117bfb7449795dbaa258856cddbb3489: default/busybox/busybox" id=e61cb03a-a6af-4e83-94be-8fdf89966c3d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.378908338Z" level=info msg="Starting container: ce6f0643a64027d439e350616d771809117bfb7449795dbaa258856cddbb3489" id=0da3e183-ec6d-4425-9f5a-73bd0053f6ff name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 19:31:12 addons-410014 crio[776]: time="2025-12-12T19:31:12.380678922Z" level=info msg="Started container" PID=6227 containerID=ce6f0643a64027d439e350616d771809117bfb7449795dbaa258856cddbb3489 description=default/busybox/busybox id=0da3e183-ec6d-4425-9f5a-73bd0053f6ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=e06d0ea517eb6cb45ebecf42bdec6b5546807e63b176dc7808e4fb4c435a4b69
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ce6f0643a6402       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   e06d0ea517eb6       busybox                                     default
	b49b6518ed002       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             12 seconds ago       Running             controller                               0                   ddaa16da73f43       ingress-nginx-controller-85d4c799dd-vgkhr   ingress-nginx
	53f30551a589c       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             12 seconds ago       Exited              patch                                    2                   53bcc506bb00c       ingress-nginx-admission-patch-k6zbn         ingress-nginx
	ebb75d365f34a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   589690ecdb123       gcp-auth-78565c9fb4-pl6ld                   gcp-auth
	76571c6136b47       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          17 seconds ago       Running             csi-snapshotter                          0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	7f1863e417224       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 seconds ago       Running             csi-provisioner                          0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	5cd7aec5d9bbe       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            19 seconds ago       Running             liveness-probe                           0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	9d3792f634584       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           20 seconds ago       Running             hostpath                                 0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	e9263571afd91       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	63bb623321fdc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            21 seconds ago       Running             gadget                                   0                   3a4bf87c35c06       gadget-pd42c                                gadget
	cb6005d68d9a9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   cc2003f2279cc       registry-proxy-5lrqf                        kube-system
	69c83a9d443b0       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             25 seconds ago       Exited              patch                                    1                   3c902afb1e5d0       gcp-auth-certs-patch-56f7p                  gcp-auth
	ab18423c7de6e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   25 seconds ago       Exited              create                                   0                   5721f8e322bbc       gcp-auth-certs-create-cndgv                 gcp-auth
	24cd917601d91       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   26 seconds ago       Running             csi-external-health-monitor-controller   0                   8f747c84631fc       csi-hostpathplugin-h5gm6                    kube-system
	3693e2f08cab4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      26 seconds ago       Running             volume-snapshot-controller               0                   05169ea7fc7d8       snapshot-controller-7d9fbc56b8-nlxtw        kube-system
	029883bbbb102       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   27 seconds ago       Exited              create                                   0                   ade534ba05c56       ingress-nginx-admission-create-nc25l        ingress-nginx
	d28f0bff28d4a       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     27 seconds ago       Running             nvidia-device-plugin-ctr                 0                   1f762fd8e0b39       nvidia-device-plugin-daemonset-qvjjb        kube-system
	355df816d72de       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              30 seconds ago       Running             csi-resizer                              0                   419f7cb7adfe3       csi-hostpath-resizer-0                      kube-system
	5378136ec2be9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     31 seconds ago       Running             amd-gpu-device-plugin                    0                   188812b9b6de3       amd-gpu-device-plugin-t98v8                 kube-system
	448e73e6cab52       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      32 seconds ago       Running             volume-snapshot-controller               0                   75699320827f4       snapshot-controller-7d9fbc56b8-ngq92        kube-system
	f6b724bf055e8       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   6a77b2e1ceac3       csi-hostpath-attacher-0                     kube-system
	24bb69257a2c0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             34 seconds ago       Running             local-path-provisioner                   0                   2dba54dad8ede       local-path-provisioner-648f6765c9-m6r4p     local-path-storage
	d5470be0baf62       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               36 seconds ago       Running             minikube-ingress-dns                     0                   7bc185504490f       kube-ingress-dns-minikube                   kube-system
	db4d51a0d90e2       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               41 seconds ago       Running             cloud-spanner-emulator                   0                   2aa7ac5474885       cloud-spanner-emulator-5bdddb765-qmtwq      default
	54eea5a21a6fd       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           44 seconds ago       Running             registry                                 0                   ea6bcb7203766       registry-6b586f9694-vrszm                   kube-system
	243ad3742fa43       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   1e922c1e1b767       yakd-dashboard-5ff678cb9-cvcw2              yakd-dashboard
	203522604a8b9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   81f30cdf86e91       metrics-server-85b7d694d7-kh47q             kube-system
	31bb87c8f5b44       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   a1104fa3784e4       coredns-66bc5c9577-gnk8c                    kube-system
	30de3e37db155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   9541d036fdd0c       storage-provisioner                         kube-system
	57cc761c4f0a4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   5a3a1aa8ca5fb       kindnet-njtv5                               kube-system
	dea3cfc0d651a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             2 minutes ago        Running             kube-proxy                               0                   8b181b80d08ef       kube-proxy-z8p4j                            kube-system
	d28712ec6c409       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             2 minutes ago        Running             kube-controller-manager                  0                   171b11909e0bb       kube-controller-manager-addons-410014       kube-system
	22824f98cf9f9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             2 minutes ago        Running             kube-apiserver                           0                   1a08e4525af22       kube-apiserver-addons-410014                kube-system
	624cd53ac7dff       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             2 minutes ago        Running             kube-scheduler                           0                   5b6889b29eeae       kube-scheduler-addons-410014                kube-system
	6fb90e1241345       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             2 minutes ago        Running             etcd                                     0                   081274d48ae7b       etcd-addons-410014                          kube-system
	
	
	==> coredns [31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1] <==
	[INFO] 10.244.0.18:33689 - 38534 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170049s
	[INFO] 10.244.0.18:60103 - 1649 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085871s
	[INFO] 10.244.0.18:60103 - 1344 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100164s
	[INFO] 10.244.0.18:34033 - 18333 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000077179s
	[INFO] 10.244.0.18:34033 - 18048 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000098829s
	[INFO] 10.244.0.18:60489 - 23905 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000059421s
	[INFO] 10.244.0.18:60489 - 23614 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000082296s
	[INFO] 10.244.0.18:33488 - 60706 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000043783s
	[INFO] 10.244.0.18:33488 - 60422 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000065959s
	[INFO] 10.244.0.18:54693 - 4319 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079918s
	[INFO] 10.244.0.18:54693 - 3845 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127703s
	[INFO] 10.244.0.21:57220 - 11675 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196868s
	[INFO] 10.244.0.21:33373 - 20807 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00025964s
	[INFO] 10.244.0.21:47284 - 58904 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144676s
	[INFO] 10.244.0.21:39464 - 21917 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000194819s
	[INFO] 10.244.0.21:36243 - 28413 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001034s
	[INFO] 10.244.0.21:58858 - 22847 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164007s
	[INFO] 10.244.0.21:44190 - 47432 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.0045179s
	[INFO] 10.244.0.21:50483 - 43991 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00568251s
	[INFO] 10.244.0.21:45107 - 50207 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005875745s
	[INFO] 10.244.0.21:42313 - 35916 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.019864734s
	[INFO] 10.244.0.21:38381 - 59733 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004325543s
	[INFO] 10.244.0.21:51516 - 9918 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006232411s
	[INFO] 10.244.0.21:49329 - 45328 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000802612s
	[INFO] 10.244.0.21:39988 - 63866 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00229231s
	
	
	==> describe nodes <==
	Name:               addons-410014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-410014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=addons-410014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T19_29_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-410014
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-410014"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 19:29:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-410014
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 19:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 19:31:15 +0000   Fri, 12 Dec 2025 19:29:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 19:31:15 +0000   Fri, 12 Dec 2025 19:29:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 19:31:15 +0000   Fri, 12 Dec 2025 19:29:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 19:31:15 +0000   Fri, 12 Dec 2025 19:29:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-410014
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                98c40f18-1184-413f-ae72-974e7ca63e13
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-qmtwq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gadget                      gadget-pd42c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gcp-auth                    gcp-auth-78565c9fb4-pl6ld                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-vgkhr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m1s
	  kube-system                 amd-gpu-device-plugin-t98v8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 coredns-66bc5c9577-gnk8c                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpathplugin-h5gm6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 etcd-addons-410014                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m9s
	  kube-system                 kindnet-njtv5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-addons-410014                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-410014        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-z8p4j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-addons-410014                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 metrics-server-85b7d694d7-kh47q              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m1s
	  kube-system                 nvidia-device-plugin-daemonset-qvjjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-6b586f9694-vrszm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-764b6fb674-j88nd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-proxy-5lrqf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 snapshot-controller-7d9fbc56b8-ngq92         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-nlxtw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  local-path-storage          local-path-provisioner-648f6765c9-m6r4p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-cvcw2               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 2m1s  kube-proxy       
	  Normal  Starting                 2m9s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s  kubelet          Node addons-410014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s  kubelet          Node addons-410014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s  kubelet          Node addons-410014 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m4s  node-controller  Node addons-410014 event: Registered Node addons-410014 in Controller
	  Normal  NodeReady                82s   kubelet          Node addons-410014 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec12 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000894] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.356887] i8042: Warning: Keylock active
	[  +0.012321] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.478725] block sda: the capability attribute has been deprecated.
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1] <==
	{"level":"warn","ts":"2025-12-12T19:29:10.010468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.016630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.027348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.034177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.041798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.047785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.053988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.061447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.068137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.074230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.080603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.088142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.094189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.112440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.115559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.121504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.127265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:10.169193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:21.141995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:21.148474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.543152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.549779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.563085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T19:29:47.569282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T19:30:59.037515Z","caller":"traceutil/trace.go:172","msg":"trace[1763838078] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"118.564408ms","start":"2025-12-12T19:30:58.918936Z","end":"2025-12-12T19:30:59.037500Z","steps":["trace[1763838078] 'process raft request'  (duration: 118.474836ms)"],"step_count":1}
	
	
	==> gcp-auth [ebb75d365f34ad5affdfbfde57294ea476ed0d5ca8eca73e9c85726aff0bf6b1] <==
	2025/12/12 19:31:04 GCP Auth Webhook started!
	2025/12/12 19:31:11 Ready to marshal response ...
	2025/12/12 19:31:11 Ready to write response ...
	2025/12/12 19:31:11 Ready to marshal response ...
	2025/12/12 19:31:11 Ready to write response ...
	2025/12/12 19:31:11 Ready to marshal response ...
	2025/12/12 19:31:11 Ready to write response ...
	
	
	==> kernel <==
	 19:31:21 up 13 min,  0 user,  load average: 1.55, 0.76, 0.30
	Linux addons-410014 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca] <==
	E1212 19:29:49.238597       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1212 19:29:49.239604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1212 19:29:49.239651       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 19:29:49.317066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1212 19:29:50.818310       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 19:29:50.818333       1 metrics.go:72] Registering metrics
	I1212 19:29:50.818402       1 controller.go:711] "Syncing nftables rules"
	I1212 19:29:59.238178       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:29:59.238222       1 main.go:301] handling current node
	I1212 19:30:09.239046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:30:09.239088       1 main.go:301] handling current node
	I1212 19:30:19.236691       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:30:19.236727       1 main.go:301] handling current node
	I1212 19:30:29.238331       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:30:29.238364       1 main.go:301] handling current node
	I1212 19:30:39.236479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:30:39.236512       1 main.go:301] handling current node
	I1212 19:30:49.236893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:30:49.236944       1 main.go:301] handling current node
	I1212 19:30:59.237218       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:30:59.237250       1 main.go:301] handling current node
	I1212 19:31:09.237319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:31:09.237348       1 main.go:301] handling current node
	I1212 19:31:19.238378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 19:31:19.238419       1 main.go:301] handling current node
	
	
	==> kube-apiserver [22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2] <==
	W1212 19:29:47.543078       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 19:29:47.549717       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 19:29:47.563056       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 19:29:47.569268       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 19:29:59.409378       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	W1212 19:29:59.409412       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	E1212 19:29:59.409419       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	E1212 19:29:59.409439       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	W1212 19:29:59.427655       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	E1212 19:29:59.427693       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	W1212 19:29:59.431145       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.21.49:443: connect: connection refused
	E1212 19:29:59.431179       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.21.49:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.745436       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	W1212 19:30:02.745531       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 19:30:02.745594       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1212 19:30:02.745935       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.751015       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.771552       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	E1212 19:30:02.812811       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.22.180:443: connect: connection refused" logger="UnhandledError"
	I1212 19:30:02.920764       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 19:31:19.526627       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42054: use of closed network connection
	E1212 19:31:19.666672       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42090: use of closed network connection
	
	
	==> kube-controller-manager [d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58] <==
	I1212 19:29:17.526187       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 19:29:17.526253       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 19:29:17.526298       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 19:29:17.526443       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 19:29:17.526532       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 19:29:17.526632       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 19:29:17.526773       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 19:29:17.526779       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 19:29:17.526826       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 19:29:17.526900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 19:29:17.526913       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 19:29:17.526927       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 19:29:17.527976       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 19:29:17.532171       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 19:29:17.534365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 19:29:17.544505       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 19:29:17.548685       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1212 19:29:47.538233       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 19:29:47.538386       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1212 19:29:47.538429       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1212 19:29:47.554951       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1212 19:29:47.558446       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 19:29:47.638849       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 19:29:47.659207       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 19:30:02.481537       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a] <==
	I1212 19:29:19.002429       1 server_linux.go:53] "Using iptables proxy"
	I1212 19:29:19.073424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 19:29:19.174545       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 19:29:19.174584       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 19:29:19.174673       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 19:29:19.393193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 19:29:19.393377       1 server_linux.go:132] "Using iptables Proxier"
	I1212 19:29:19.515032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 19:29:19.521601       1 server.go:527] "Version info" version="v1.34.2"
	I1212 19:29:19.521639       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 19:29:19.552738       1 config.go:200] "Starting service config controller"
	I1212 19:29:19.552761       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 19:29:19.552787       1 config.go:106] "Starting endpoint slice config controller"
	I1212 19:29:19.552792       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 19:29:19.552805       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 19:29:19.552809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 19:29:19.553134       1 config.go:309] "Starting node config controller"
	I1212 19:29:19.553160       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 19:29:19.660852       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 19:29:19.660896       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 19:29:19.660930       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 19:29:19.672665       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746] <==
	E1212 19:29:10.556829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 19:29:10.556943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 19:29:10.556964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 19:29:10.557082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 19:29:10.557121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 19:29:10.558646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 19:29:10.558663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 19:29:10.558773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 19:29:10.558807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 19:29:10.558890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 19:29:10.558909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 19:29:10.558999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 19:29:10.559111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 19:29:10.559131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 19:29:10.558411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 19:29:10.559357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 19:29:10.559845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 19:29:11.386879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 19:29:11.432796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 19:29:11.442608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 19:29:11.653799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 19:29:11.689757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 19:29:11.690364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 19:29:11.744503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1212 19:29:13.152621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.001733    1276 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w596\" (UniqueName: \"kubernetes.io/projected/ba52d7ba-6cdf-4708-b022-f352784d7a34-kube-api-access-4w596\") pod \"ba52d7ba-6cdf-4708-b022-f352784d7a34\" (UID: \"ba52d7ba-6cdf-4708-b022-f352784d7a34\") "
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.004131    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98335c5-84d0-48a8-9447-f5867fb2a3c1-kube-api-access-9kqt7" (OuterVolumeSpecName: "kube-api-access-9kqt7") pod "d98335c5-84d0-48a8-9447-f5867fb2a3c1" (UID: "d98335c5-84d0-48a8-9447-f5867fb2a3c1"). InnerVolumeSpecName "kube-api-access-9kqt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.004221    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba52d7ba-6cdf-4708-b022-f352784d7a34-kube-api-access-4w596" (OuterVolumeSpecName: "kube-api-access-4w596") pod "ba52d7ba-6cdf-4708-b022-f352784d7a34" (UID: "ba52d7ba-6cdf-4708-b022-f352784d7a34"). InnerVolumeSpecName "kube-api-access-4w596". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.103020    1276 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9kqt7\" (UniqueName: \"kubernetes.io/projected/d98335c5-84d0-48a8-9447-f5867fb2a3c1-kube-api-access-9kqt7\") on node \"addons-410014\" DevicePath \"\""
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.103060    1276 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4w596\" (UniqueName: \"kubernetes.io/projected/ba52d7ba-6cdf-4708-b022-f352784d7a34-kube-api-access-4w596\") on node \"addons-410014\" DevicePath \"\""
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.907615    1276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5721f8e322bbc05b50757211a0cab78ebfee2de307068fcd4c3cfc84acf5c038"
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.909130    1276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c902afb1e5d0cb82987b6cd3ec1541f3cdc313f93ef9db5424e4584bd2b905e"
	Dec 12 19:30:58 addons-410014 kubelet[1276]: I1212 19:30:58.909450    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5lrqf" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:30:59 addons-410014 kubelet[1276]: I1212 19:30:59.928518    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-pd42c" podStartSLOduration=64.756450542 podStartE2EDuration="1m39.928501519s" podCreationTimestamp="2025-12-12 19:29:20 +0000 UTC" firstStartedPulling="2025-12-12 19:30:24.217666937 +0000 UTC m=+71.708004141" lastFinishedPulling="2025-12-12 19:30:59.389717911 +0000 UTC m=+106.880055118" observedRunningTime="2025-12-12 19:30:59.928372955 +0000 UTC m=+107.418710180" watchObservedRunningTime="2025-12-12 19:30:59.928501519 +0000 UTC m=+107.418838745"
	Dec 12 19:31:01 addons-410014 kubelet[1276]: I1212 19:31:01.665883    1276 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 12 19:31:01 addons-410014 kubelet[1276]: I1212 19:31:01.665935    1276 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 12 19:31:03 addons-410014 kubelet[1276]: E1212 19:31:03.241690    1276 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 12 19:31:03 addons-410014 kubelet[1276]: E1212 19:31:03.241786    1276 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b-gcr-creds podName:8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b nodeName:}" failed. No retries permitted until 2025-12-12 19:32:07.241767685 +0000 UTC m=+174.732104901 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b-gcr-creds") pod "registry-creds-764b6fb674-j88nd" (UID: "8b8f037f-1ba2-44dc-90c5-1575e1dc8c8b") : secret "registry-creds-gcr" not found
	Dec 12 19:31:03 addons-410014 kubelet[1276]: I1212 19:31:03.958102    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-h5gm6" podStartSLOduration=1.048703117 podStartE2EDuration="1m4.958088019s" podCreationTimestamp="2025-12-12 19:29:59 +0000 UTC" firstStartedPulling="2025-12-12 19:29:59.84290571 +0000 UTC m=+47.333242932" lastFinishedPulling="2025-12-12 19:31:03.752290624 +0000 UTC m=+111.242627834" observedRunningTime="2025-12-12 19:31:03.957049792 +0000 UTC m=+111.447387018" watchObservedRunningTime="2025-12-12 19:31:03.958088019 +0000 UTC m=+111.448425243"
	Dec 12 19:31:04 addons-410014 kubelet[1276]: I1212 19:31:04.960592    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-pl6ld" podStartSLOduration=96.704282511 podStartE2EDuration="1m37.960571878s" podCreationTimestamp="2025-12-12 19:29:27 +0000 UTC" firstStartedPulling="2025-12-12 19:31:03.602257284 +0000 UTC m=+111.092594490" lastFinishedPulling="2025-12-12 19:31:04.858546634 +0000 UTC m=+112.348883857" observedRunningTime="2025-12-12 19:31:04.958165628 +0000 UTC m=+112.448502854" watchObservedRunningTime="2025-12-12 19:31:04.960571878 +0000 UTC m=+112.450909103"
	Dec 12 19:31:07 addons-410014 kubelet[1276]: I1212 19:31:07.597384    1276 scope.go:117] "RemoveContainer" containerID="7e0b8c3511a0e4061bc6d1515b7478292f91e6a0dc1f2e4c8bf5df38611c7ce5"
	Dec 12 19:31:08 addons-410014 kubelet[1276]: I1212 19:31:08.969606    1276 scope.go:117] "RemoveContainer" containerID="7e0b8c3511a0e4061bc6d1515b7478292f91e6a0dc1f2e4c8bf5df38611c7ce5"
	Dec 12 19:31:08 addons-410014 kubelet[1276]: I1212 19:31:08.990024    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-vgkhr" podStartSLOduration=103.962759725 podStartE2EDuration="1m48.990007787s" podCreationTimestamp="2025-12-12 19:29:20 +0000 UTC" firstStartedPulling="2025-12-12 19:31:03.638404472 +0000 UTC m=+111.128741678" lastFinishedPulling="2025-12-12 19:31:08.66565253 +0000 UTC m=+116.155989740" observedRunningTime="2025-12-12 19:31:08.979240591 +0000 UTC m=+116.469577816" watchObservedRunningTime="2025-12-12 19:31:08.990007787 +0000 UTC m=+116.480345012"
	Dec 12 19:31:10 addons-410014 kubelet[1276]: I1212 19:31:10.095024    1276 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp9t4\" (UniqueName: \"kubernetes.io/projected/d1dd90d4-b250-45c2-9e00-9c0d08b896f7-kube-api-access-dp9t4\") pod \"d1dd90d4-b250-45c2-9e00-9c0d08b896f7\" (UID: \"d1dd90d4-b250-45c2-9e00-9c0d08b896f7\") "
	Dec 12 19:31:10 addons-410014 kubelet[1276]: I1212 19:31:10.097340    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1dd90d4-b250-45c2-9e00-9c0d08b896f7-kube-api-access-dp9t4" (OuterVolumeSpecName: "kube-api-access-dp9t4") pod "d1dd90d4-b250-45c2-9e00-9c0d08b896f7" (UID: "d1dd90d4-b250-45c2-9e00-9c0d08b896f7"). InnerVolumeSpecName "kube-api-access-dp9t4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 19:31:10 addons-410014 kubelet[1276]: I1212 19:31:10.195958    1276 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dp9t4\" (UniqueName: \"kubernetes.io/projected/d1dd90d4-b250-45c2-9e00-9c0d08b896f7-kube-api-access-dp9t4\") on node \"addons-410014\" DevicePath \"\""
	Dec 12 19:31:10 addons-410014 kubelet[1276]: I1212 19:31:10.981950    1276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53bcc506bb00cf265bf963b8b1da289b178c1cfca286b6c12c06756768c21eac"
	Dec 12 19:31:11 addons-410014 kubelet[1276]: I1212 19:31:11.503094    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6rq\" (UniqueName: \"kubernetes.io/projected/616944ae-2125-4437-bf51-6aa3067feb79-kube-api-access-ll6rq\") pod \"busybox\" (UID: \"616944ae-2125-4437-bf51-6aa3067feb79\") " pod="default/busybox"
	Dec 12 19:31:11 addons-410014 kubelet[1276]: I1212 19:31:11.503155    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/616944ae-2125-4437-bf51-6aa3067feb79-gcp-creds\") pod \"busybox\" (UID: \"616944ae-2125-4437-bf51-6aa3067feb79\") " pod="default/busybox"
	Dec 12 19:31:13 addons-410014 kubelet[1276]: I1212 19:31:13.002150    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.380124559 podStartE2EDuration="2.002133538s" podCreationTimestamp="2025-12-12 19:31:11 +0000 UTC" firstStartedPulling="2025-12-12 19:31:11.717018165 +0000 UTC m=+119.207355383" lastFinishedPulling="2025-12-12 19:31:12.339027156 +0000 UTC m=+119.829364362" observedRunningTime="2025-12-12 19:31:13.001947761 +0000 UTC m=+120.492284987" watchObservedRunningTime="2025-12-12 19:31:13.002133538 +0000 UTC m=+120.492470764"
	
	
	==> storage-provisioner [30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3] <==
	W1212 19:30:56.079860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:30:58.082518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:30:58.088183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:00.090968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:00.094125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:02.096651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:02.100414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:04.103417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:04.108860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:06.112108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:06.116374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:08.119879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:08.124061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:10.126825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:10.130636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:12.132970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:12.136283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:14.138912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:14.145674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:16.148328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:16.152924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:18.155168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:18.158361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:20.161517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:31:20.165304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
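
The storage-provisioner log above is dominated by repeated client-go deprecation warnings: the provisioner still watches the legacy v1 Endpoints resource, which Kubernetes v1.33+ flags in favour of discovery.k8s.io/v1 EndpointSlice. As a hedged illustration only (the kubeconfig path, namespace, and service name below are assumptions for the sketch, not values taken from this run), an equivalent lookup against the non-deprecated API looks like this:

// Minimal sketch: list EndpointSlices instead of the deprecated v1 Endpoints.
// Namespace "kube-system" and service "kube-dns" are illustrative only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// EndpointSlices for a service are selected via the standard
	// "kubernetes.io/service-name" label, not by sharing the Service's
	// object name as the legacy v1 Endpoints resource does.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}

The practical difference for a consumer is that a single Service may own several EndpointSlices, so results have to be aggregated across slices rather than read from one object.
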
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-410014 -n addons-410014
helpers_test.go:270: (dbg) Run:  kubectl --context addons-410014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-create-cndgv gcp-auth-certs-patch-56f7p ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn registry-creds-764b6fb674-j88nd
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-410014 describe pod gcp-auth-certs-create-cndgv gcp-auth-certs-patch-56f7p ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn registry-creds-764b6fb674-j88nd
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-410014 describe pod gcp-auth-certs-create-cndgv gcp-auth-certs-patch-56f7p ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn registry-creds-764b6fb674-j88nd: exit status 1 (66.010344ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-cndgv" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-56f7p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-nc25l" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k6zbn" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-j88nd" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-410014 describe pod gcp-auth-certs-create-cndgv gcp-auth-certs-patch-56f7p ingress-nginx-admission-create-nc25l ingress-nginx-admission-patch-k6zbn registry-creds-764b6fb674-j88nd: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable headlamp --alsologtostderr -v=1: exit status 11 (232.992321ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:22.139724   20117 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:22.140124   20117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:22.140138   20117 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:22.140146   20117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:22.140673   20117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:22.141201   20117 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:22.141588   20117 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:22.141609   20117 addons.go:622] checking whether the cluster is paused
	I1212 19:31:22.141691   20117 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:22.141703   20117 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:22.142058   20117 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:22.159340   20117 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:22.159377   20117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:22.176474   20117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:22.269185   20117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:22.269261   20117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:22.296847   20117 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:22.296869   20117 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:22.296874   20117 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:22.296879   20117 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:22.296885   20117 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:22.296890   20117 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:22.296895   20117 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:22.296900   20117 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:22.296905   20117 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:22.296912   20117 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:22.296917   20117 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:22.296928   20117 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:22.296932   20117 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:22.296940   20117 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:22.296944   20117 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:22.296951   20117 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:22.296957   20117 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:22.296961   20117 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:22.296964   20117 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:22.296967   20117 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:22.296972   20117 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:22.296978   20117 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:22.296981   20117 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:22.296984   20117 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:22.296987   20117 cri.go:89] found id: ""
	I1212 19:31:22.297021   20117 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:22.310073   20117 out.go:203] 
	W1212 19:31:22.311303   20117 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:22.311322   20117 out.go:285] * 
	* 
	W1212 19:31:22.314160   20117 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:22.315205   20117 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.42s)
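
Every addons-disable failure in this report follows the same shape as the Headlamp case above: before disabling anything, minikube probes whether the cluster is paused by listing kube-system containers with crictl and then asking runc for its container states; on this crio node /run/runc does not exist, so "sudo runc list -f json" exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). The sketch below is an illustrative reconstruction of that probe, not minikube's actual implementation; the two command invocations mirror the ones visible in the stderr trace.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listPaused mirrors the probe seen in the logs: enumerate kube-system
// containers via crictl, then ask runc for container states. On this node the
// second step fails because /run/runc is missing, and that error is what
// surfaces as MK_ADDON_DISABLE_PAUSED.
func listPaused() error {
	// sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return fmt.Errorf("crictl ps: %w", err)
	}
	fmt.Printf("found %d kube-system containers\n", len(strings.Fields(string(ids))))

	// sudo runc list -f json -- this is the call that fails with
	// "open /run/runc: no such file or directory" under crio.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list: %w: %s", err, strings.TrimSpace(string(out)))
	}
	// A full implementation would decode the JSON state list here and keep
	// only entries whose status is "paused".
	return nil
}

func main() {
	if err := listPaused(); err != nil {
		fmt.Println("check paused:", err)
	}
}

Whether the right fix is to tolerate a missing /run/runc or to query container state through crictl instead is outside the scope of this report; the sketch only shows where the exit status 11 in each failing disable call originates.
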

                                                
                                    
TestAddons/parallel/CloudSpanner (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-qmtwq" [5cb867ff-e616-48c7-82cd-9d1d363da633] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004845177s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (264.275537ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:44.811123   22235 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:44.811422   22235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:44.811432   22235 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:44.811436   22235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:44.811635   22235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:44.811875   22235 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:44.812146   22235 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:44.812163   22235 addons.go:622] checking whether the cluster is paused
	I1212 19:31:44.812241   22235 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:44.812253   22235 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:44.812631   22235 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:44.832024   22235 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:44.832079   22235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:44.851467   22235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:44.950733   22235 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:44.950815   22235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:44.985329   22235 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:44.985360   22235 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:44.985364   22235 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:44.985368   22235 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:44.985373   22235 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:44.985381   22235 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:44.985383   22235 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:44.985386   22235 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:44.985389   22235 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:44.985397   22235 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:44.985400   22235 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:44.985403   22235 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:44.985406   22235 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:44.985408   22235 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:44.985411   22235 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:44.985416   22235 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:44.985418   22235 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:44.985422   22235 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:44.985425   22235 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:44.985428   22235 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:44.985431   22235 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:44.985433   22235 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:44.985436   22235 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:44.985439   22235 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:44.985442   22235 cri.go:89] found id: ""
	I1212 19:31:44.985477   22235 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:45.000169   22235 out.go:203] 
	W1212 19:31:45.001305   22235 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:45.001329   22235 out.go:285] * 
	* 
	W1212 19:31:45.004326   22235 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:45.006764   22235 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.27s)

                                                
                                    
TestAddons/parallel/LocalPath (8.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-410014 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-410014 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410014 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [805028b5-3673-439c-ab74-be1e226bc125] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [805028b5-3673-439c-ab74-be1e226bc125] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [805028b5-3673-439c-ab74-be1e226bc125] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003192912s
addons_test.go:969: (dbg) Run:  kubectl --context addons-410014 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 ssh "cat /opt/local-path-provisioner/pvc-e79eb42e-1321-4a09-9867-49823fdf7fbb_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-410014 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-410014 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (253.831788ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:44.996680   22291 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:44.996845   22291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:44.996852   22291 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:44.996859   22291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:44.997098   22291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:44.997412   22291 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:44.997748   22291 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:44.997772   22291 addons.go:622] checking whether the cluster is paused
	I1212 19:31:44.997868   22291 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:44.997883   22291 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:44.998369   22291 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:45.018459   22291 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:45.018530   22291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:45.036771   22291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:45.135688   22291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:45.135764   22291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:45.164377   22291 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:45.164394   22291 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:45.164398   22291 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:45.164401   22291 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:45.164404   22291 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:45.164410   22291 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:45.164423   22291 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:45.164428   22291 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:45.164437   22291 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:45.164448   22291 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:45.164456   22291 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:45.164460   22291 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:45.164467   22291 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:45.164470   22291 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:45.164473   22291 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:45.164485   22291 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:45.164492   22291 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:45.164496   22291 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:45.164499   22291 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:45.164502   22291 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:45.164507   22291 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:45.164510   22291 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:45.164518   22291 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:45.164523   22291 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:45.164531   22291 cri.go:89] found id: ""
	I1212 19:31:45.164580   22291 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:45.179645   22291 out.go:203] 
	W1212 19:31:45.181471   22291 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:45.181494   22291 out.go:285] * 
	* 
	W1212 19:31:45.184431   22291 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:45.185710   22291 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.08s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-qvjjb" [a38c714e-a797-40a2-8341-89a74eaf184e] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002660706s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (233.06916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:30.684802   21147 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:30.685110   21147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:30.685120   21147 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:30.685126   21147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:30.685309   21147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:30.685557   21147 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:30.685852   21147 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:30.685875   21147 addons.go:622] checking whether the cluster is paused
	I1212 19:31:30.685974   21147 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:30.685990   21147 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:30.686355   21147 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:30.702705   21147 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:30.702752   21147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:30.720013   21147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:30.812133   21147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:30.812197   21147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:30.843080   21147 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:30.843101   21147 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:30.843107   21147 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:30.843111   21147 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:30.843127   21147 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:30.843131   21147 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:30.843135   21147 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:30.843140   21147 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:30.843149   21147 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:30.843157   21147 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:30.843166   21147 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:30.843170   21147 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:30.843178   21147 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:30.843181   21147 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:30.843187   21147 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:30.843191   21147 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:30.843196   21147 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:30.843200   21147 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:30.843203   21147 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:30.843205   21147 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:30.843208   21147 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:30.843210   21147 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:30.843217   21147 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:30.843220   21147 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:30.843223   21147 cri.go:89] found id: ""
	I1212 19:31:30.843295   21147 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:30.857232   21147 out.go:203] 
	W1212 19:31:30.858414   21147 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:30.858429   21147 out.go:285] * 
	* 
	W1212 19:31:30.861722   21147 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:30.862919   21147 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-cvcw2" [b1c42eef-c929-478f-ba00-c080b664e6de] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003692192s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable yakd --alsologtostderr -v=1: exit status 11 (251.757207ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:38.543251   21814 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:38.543601   21814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:38.543616   21814 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:38.543623   21814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:38.543950   21814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:38.544291   21814 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:38.544718   21814 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:38.544746   21814 addons.go:622] checking whether the cluster is paused
	I1212 19:31:38.544875   21814 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:38.544893   21814 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:38.545390   21814 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:38.565676   21814 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:38.565736   21814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:38.585991   21814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:38.685106   21814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:38.685182   21814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:38.713853   21814 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:38.713874   21814 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:38.713880   21814 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:38.713883   21814 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:38.713886   21814 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:38.713889   21814 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:38.713891   21814 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:38.713895   21814 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:38.713900   21814 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:38.713907   21814 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:38.713912   21814 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:38.713917   21814 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:38.713922   21814 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:38.713926   21814 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:38.713932   21814 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:38.713947   21814 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:38.713957   21814 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:38.713965   21814 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:38.713970   21814 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:38.713974   21814 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:38.713977   21814 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:38.713979   21814 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:38.713982   21814 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:38.713986   21814 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:38.713990   21814 cri.go:89] found id: ""
	I1212 19:31:38.714050   21814 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:38.729230   21814 out.go:203] 
	W1212 19:31:38.730436   21814 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:38.730452   21814 out.go:285] * 
	* 
	W1212 19:31:38.733541   21814 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:38.734597   21814 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-t98v8" [78e1b7d3-1dbb-4ef6-83b4-e047490b8d24] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003421646s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410014 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410014 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (235.96111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:31:36.928065   21605 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:31:36.928384   21605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:36.928395   21605 out.go:374] Setting ErrFile to fd 2...
	I1212 19:31:36.928401   21605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:31:36.928612   21605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:31:36.928899   21605 mustload.go:66] Loading cluster: addons-410014
	I1212 19:31:36.929245   21605 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:36.929269   21605 addons.go:622] checking whether the cluster is paused
	I1212 19:31:36.929388   21605 config.go:182] Loaded profile config "addons-410014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:31:36.929405   21605 host.go:66] Checking if "addons-410014" exists ...
	I1212 19:31:36.929799   21605 cli_runner.go:164] Run: docker container inspect addons-410014 --format={{.State.Status}}
	I1212 19:31:36.946737   21605 ssh_runner.go:195] Run: systemctl --version
	I1212 19:31:36.946790   21605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-410014
	I1212 19:31:36.963963   21605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/addons-410014/id_rsa Username:docker}
	I1212 19:31:37.056207   21605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:31:37.056308   21605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:31:37.084785   21605 cri.go:89] found id: "76571c6136b47298f32a0b1ec18cc99af7b389e2b5dbe495f1629d5e2703e8de"
	I1212 19:31:37.084799   21605 cri.go:89] found id: "7f1863e417224e030ce9a9d2e2f7136cbd19e03c826aa5403bad131c91dc9b93"
	I1212 19:31:37.084803   21605 cri.go:89] found id: "5cd7aec5d9bbe817f190772e730768355d87901e7fc80e8a00143cfda71f1d24"
	I1212 19:31:37.084806   21605 cri.go:89] found id: "9d3792f63458445175e10a14dca33e6972f1836499b2b34aec58251c1aae25fd"
	I1212 19:31:37.084809   21605 cri.go:89] found id: "e9263571afd91c7084801115b897309fd4e916d9659237639dc9fac1f242fef2"
	I1212 19:31:37.084812   21605 cri.go:89] found id: "cb6005d68d9a96016741ff117661915a95e429ed8aef7fc5832c7b9335aaf7c7"
	I1212 19:31:37.084815   21605 cri.go:89] found id: "24cd917601d91959bf6581b8065f18f069164f1f6db8c6dada400521a951cfd4"
	I1212 19:31:37.084818   21605 cri.go:89] found id: "3693e2f08cab47eb8561d548d6df8eed556b0c840ae77214117a4018d1ef6385"
	I1212 19:31:37.084820   21605 cri.go:89] found id: "d28f0bff28d4accb708ad0b723f3d8b457fae7a4905c32e5e61779549873d5fc"
	I1212 19:31:37.084827   21605 cri.go:89] found id: "355df816d72de65cb71104e428e2341e612b893f1f784bd69a95e8b4d25b2f15"
	I1212 19:31:37.084830   21605 cri.go:89] found id: "5378136ec2be93b30b5623db6080933a844f5c1ee7ab34812921d4ca12a2ec39"
	I1212 19:31:37.084833   21605 cri.go:89] found id: "448e73e6cab5275217215830ac10a0c4029a9e407db13c1c6352947a6c77b9f5"
	I1212 19:31:37.084841   21605 cri.go:89] found id: "f6b724bf055e80101eea67abe6cf1548be806ad8088ad9e482367bfb38836269"
	I1212 19:31:37.084846   21605 cri.go:89] found id: "d5470be0baf62349e9e352868eba4d0011d02e21667dbb5d805f2985be00ccdc"
	I1212 19:31:37.084853   21605 cri.go:89] found id: "54eea5a21a6fd70fe1e9d45f9a1a45437a77811fe92301178f0b99e163b4c669"
	I1212 19:31:37.084867   21605 cri.go:89] found id: "203522604a8b916bc3a950e033ca9bb281723c416d6c4583c96dbe77e514160d"
	I1212 19:31:37.084874   21605 cri.go:89] found id: "31bb87c8f5b440ef6f409980279d69a279ee8873f638c558c1455812526d42f1"
	I1212 19:31:37.084880   21605 cri.go:89] found id: "30de3e37db155bdf4e8d95d19b3266cd29f007ba5c66fdf65486edcc3d9fd8f3"
	I1212 19:31:37.084885   21605 cri.go:89] found id: "57cc761c4f0a4a4897b6a36f7a51c07f8f9b9d7923145c3005d02d41381917ca"
	I1212 19:31:37.084889   21605 cri.go:89] found id: "dea3cfc0d651ab9de775ab172dfcbe74be7ff2c5ac3aa39aa3a32c2ca09c508a"
	I1212 19:31:37.084892   21605 cri.go:89] found id: "d28712ec6c40914cb6c8e08632c1ae630a5201547999a8288f0c04c8dd6cee58"
	I1212 19:31:37.084896   21605 cri.go:89] found id: "22824f98cf9f933f80e85a004433640955a438da23b09eee368ae6a14f2c12f2"
	I1212 19:31:37.084900   21605 cri.go:89] found id: "624cd53ac7dff424e3ce412fbf38f28bdb8f59a04c4a399ca5d5f0b1ec4ee746"
	I1212 19:31:37.084904   21605 cri.go:89] found id: "6fb90e1241345ef6183ebbe960206cf20dfe7d2b12dc3ca243063ca515dd58b1"
	I1212 19:31:37.084908   21605 cri.go:89] found id: ""
	I1212 19:31:37.084949   21605 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 19:31:37.099206   21605 out.go:203] 
	W1212 19:31:37.100268   21605 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:31:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 19:31:37.100295   21605 out.go:285] * 
	* 
	W1212 19:31:37.103123   21605 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:31:37.104202   21605 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-410014 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
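
Note on the failure mode above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then asking runc for its container list; in this run the second step fails because /run/runc does not exist (plausibly because the CRI-O runtime in use does not keep state there), and the command aborts with MK_ADDON_DISABLE_PAUSED. The following is a minimal Go sketch of that two-step check, meant to be run on the node itself; it is an illustration, not the actual minikube code, and it assumes crictl and runc are reachable via sudo.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: kube-system container IDs, one per line (matches the crictl call in the log).
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

	// Step 2: runc's own container list; this is the call that exits 1 above
	// with "open /run/runc: no such file or directory".
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}
	var containers []map[string]any
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("unexpected runc output:", err)
		return
	}
	fmt.Printf("runc sees %d containers\n", len(containers))
}
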

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (2.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 image ls --format table --alsologtostderr: (2.298249258s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853944 image ls --format table --alsologtostderr:
┌───────┬─────┬──────────┬──────┐
│ IMAGE │ TAG │ IMAGE ID │ SIZE │
└───────┴─────┴──────────┴──────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853944 image ls --format table --alsologtostderr:
I1212 19:40:22.401301   68381 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:22.401454   68381 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.401490   68381 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:22.401509   68381 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.401851   68381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:40:22.402701   68381 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.402931   68381 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.403696   68381 cli_runner.go:164] Run: docker container inspect functional-853944 --format={{.State.Status}}
I1212 19:40:22.429729   68381 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:22.429784   68381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853944
I1212 19:40:22.454634   68381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-853944/id_rsa Username:docker}
I1212 19:40:22.564659   68381 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 19:40:24.599319   68381 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.034629796s)
W1212 19:40:24.599409   68381 cache_images.go:736] Failed to list images for profile functional-853944 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1212 19:40:24.596209    7336 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-12T19:40:24Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected │ registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (2.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 image ls --format json --alsologtostderr: (2.295335814s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853944 image ls --format json --alsologtostderr:
[]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853944 image ls --format json --alsologtostderr:
I1212 19:40:22.369559   68373 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:22.370166   68373 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.370182   68373 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:22.370189   68373 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.370530   68373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:40:22.371367   68373 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.371511   68373 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.372140   68373 cli_runner.go:164] Run: docker container inspect functional-853944 --format={{.State.Status}}
I1212 19:40:22.400115   68373 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:22.400185   68373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853944
I1212 19:40:22.426998   68373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-853944/id_rsa Username:docker}
I1212 19:40:22.539464   68373 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 19:40:24.577109   68373 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.037605541s)
W1212 19:40:24.577205   68373 cache_images.go:736] Failed to list images for profile functional-853944 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1212 19:40:24.574336    7324 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-12T19:40:24Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.30s)
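
Both ImageList failures above come from the same underlying call: "sudo crictl images --output json" inside the node, which hits a DeadlineExceeded from the CRI image service after roughly two seconds (crictl's default per-request timeout is 2s). Below is a small Go sketch of that listing with crictl's global --timeout raised, useful for telling a slow image service apart from a hung one. It is an illustration under the assumption that crictl is on the node's PATH; it is not the test's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages mirrors the relevant part of "crictl images --output json".
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// --timeout is crictl's per-request timeout toward the CRI socket; the
	// default (2s) is what expired with DeadlineExceeded in the failures above.
	out, err := exec.Command("sudo", "crictl", "--timeout", "30s",
		"images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images failed:", err)
		return
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("could not decode crictl output:", err)
		return
	}
	for _, img := range imgs.Images {
		fmt.Println(img.RepoTags) // the test expects registry.k8s.io/pause to appear here
	}
}
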

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.24s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-186519 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-186519 --output=json --user=testUser: exit status 80 (2.239014006s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b56cd5ef-4267-4493-88d3-121a574306ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-186519 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"eaefbcba-3381-4ac5-b001-490229d20f0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T19:52:13Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b1e4a2f6-a7cd-47f4-a26f-baaecf572506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-186519 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.24s)
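
The events in the stdout capture above are one JSON object per line with CloudEvents-style fields (specversion, id, source, type, data). The sketch below shows one way to consume that stream from Go, using only field names visible in the captured output; the binary path and profile name are taken from this run and are otherwise arbitrary. It decodes and prints the events, nothing more.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "pause",
		"-p", "json-output-186519", "--output=json", "--user=testUser")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println(err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("minikube exited with:", err) // exit status 80 in the run above
	}
}
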

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.78s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-186519 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-186519 --output=json --user=testUser: exit status 80 (1.780214081s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7bf835f4-cdf9-4123-aa75-0fdd0f39ea10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-186519 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"947f7e29-bfa1-409a-aa27-8f48241d078d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T19:52:14Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"fefcaf02-43f9-40e6-98f4-7141d40dec1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-186519 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.78s)

                                                
                                    
x
+
TestPause/serial/Pause (6.96s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-243084 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-243084 --alsologtostderr -v=5: exit status 80 (1.645044293s)

                                                
                                                
-- stdout --
	* Pausing node pause-243084 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:05:34.751908  234693 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:05:34.752031  234693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:05:34.752042  234693 out.go:374] Setting ErrFile to fd 2...
	I1212 20:05:34.752049  234693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:05:34.752255  234693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:05:34.752514  234693 out.go:368] Setting JSON to false
	I1212 20:05:34.752536  234693 mustload.go:66] Loading cluster: pause-243084
	I1212 20:05:34.752930  234693 config.go:182] Loaded profile config "pause-243084": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:05:34.753332  234693 cli_runner.go:164] Run: docker container inspect pause-243084 --format={{.State.Status}}
	I1212 20:05:34.771999  234693 host.go:66] Checking if "pause-243084" exists ...
	I1212 20:05:34.772409  234693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:05:34.833053  234693 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 20:05:34.823188326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:05:34.833724  234693 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765505725-22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765505725-22112-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-243084 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 20:05:34.835442  234693 out.go:179] * Pausing node pause-243084 ... 
	I1212 20:05:34.836489  234693 host.go:66] Checking if "pause-243084" exists ...
	I1212 20:05:34.836790  234693 ssh_runner.go:195] Run: systemctl --version
	I1212 20:05:34.836834  234693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-243084
	I1212 20:05:34.854568  234693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33024 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/pause-243084/id_rsa Username:docker}
	I1212 20:05:34.952185  234693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:05:34.965570  234693 pause.go:52] kubelet running: true
	I1212 20:05:34.965640  234693 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:05:35.120389  234693 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:05:35.120475  234693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:05:35.197149  234693 cri.go:89] found id: "e8f370c2d2aa697861a24de574a97463dd209390cdae8a867d94358874809cb8"
	I1212 20:05:35.197171  234693 cri.go:89] found id: "1f4f219d24da1cb0e1ed88f1802750597f260740eb422b534193439a1fa35e5e"
	I1212 20:05:35.197178  234693 cri.go:89] found id: "0d758315dc8b6433c0715bcbe02b1b392f42622af50a9126615d035b81a7334a"
	I1212 20:05:35.197182  234693 cri.go:89] found id: "1546a453b9b14d1a23439d71cf0e13e59110c6e9fccec6b1fd602c89ff0a23f7"
	I1212 20:05:35.197187  234693 cri.go:89] found id: "a7198e0776a35180f558414147085cb0f0bcb58673b28c2b7c096805999ac9d4"
	I1212 20:05:35.197206  234693 cri.go:89] found id: "47d1914f1d154a7648d210a0ac3121cdfee31a410a61529ef122381bd3ee2fe4"
	I1212 20:05:35.197217  234693 cri.go:89] found id: "a1e300479c0a86bae02ef355d09dbdd9889387d9785197a8a19ae54bd52c13c9"
	I1212 20:05:35.197222  234693 cri.go:89] found id: ""
	I1212 20:05:35.197263  234693 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:05:35.212140  234693 retry.go:31] will retry after 166.535849ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:05:35Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:05:35.379564  234693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:05:35.393267  234693 pause.go:52] kubelet running: false
	I1212 20:05:35.393343  234693 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:05:35.530382  234693 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:05:35.530474  234693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:05:35.622484  234693 cri.go:89] found id: "e8f370c2d2aa697861a24de574a97463dd209390cdae8a867d94358874809cb8"
	I1212 20:05:35.622506  234693 cri.go:89] found id: "1f4f219d24da1cb0e1ed88f1802750597f260740eb422b534193439a1fa35e5e"
	I1212 20:05:35.622513  234693 cri.go:89] found id: "0d758315dc8b6433c0715bcbe02b1b392f42622af50a9126615d035b81a7334a"
	I1212 20:05:35.622519  234693 cri.go:89] found id: "1546a453b9b14d1a23439d71cf0e13e59110c6e9fccec6b1fd602c89ff0a23f7"
	I1212 20:05:35.622523  234693 cri.go:89] found id: "a7198e0776a35180f558414147085cb0f0bcb58673b28c2b7c096805999ac9d4"
	I1212 20:05:35.622528  234693 cri.go:89] found id: "47d1914f1d154a7648d210a0ac3121cdfee31a410a61529ef122381bd3ee2fe4"
	I1212 20:05:35.622533  234693 cri.go:89] found id: "a1e300479c0a86bae02ef355d09dbdd9889387d9785197a8a19ae54bd52c13c9"
	I1212 20:05:35.622538  234693 cri.go:89] found id: ""
	I1212 20:05:35.622584  234693 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:05:35.661711  234693 retry.go:31] will retry after 453.92413ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:05:35Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:05:36.115962  234693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:05:36.129539  234693 pause.go:52] kubelet running: false
	I1212 20:05:36.129603  234693 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:05:36.239717  234693 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:05:36.239784  234693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:05:36.311869  234693 cri.go:89] found id: "e8f370c2d2aa697861a24de574a97463dd209390cdae8a867d94358874809cb8"
	I1212 20:05:36.311892  234693 cri.go:89] found id: "1f4f219d24da1cb0e1ed88f1802750597f260740eb422b534193439a1fa35e5e"
	I1212 20:05:36.311902  234693 cri.go:89] found id: "0d758315dc8b6433c0715bcbe02b1b392f42622af50a9126615d035b81a7334a"
	I1212 20:05:36.311907  234693 cri.go:89] found id: "1546a453b9b14d1a23439d71cf0e13e59110c6e9fccec6b1fd602c89ff0a23f7"
	I1212 20:05:36.311911  234693 cri.go:89] found id: "a7198e0776a35180f558414147085cb0f0bcb58673b28c2b7c096805999ac9d4"
	I1212 20:05:36.311917  234693 cri.go:89] found id: "47d1914f1d154a7648d210a0ac3121cdfee31a410a61529ef122381bd3ee2fe4"
	I1212 20:05:36.311921  234693 cri.go:89] found id: "a1e300479c0a86bae02ef355d09dbdd9889387d9785197a8a19ae54bd52c13c9"
	I1212 20:05:36.311926  234693 cri.go:89] found id: ""
	I1212 20:05:36.311970  234693 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:05:36.330745  234693 out.go:203] 
	W1212 20:05:36.332393  234693 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:05:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:05:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:05:36.332408  234693 out.go:285] * 
	* 
	W1212 20:05:36.336509  234693 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:05:36.337873  234693 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-243084 --alsologtostderr -v=5" : exit status 80
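
In the stderr above, minikube retries the failing "sudo runc list -f json" with growing delays (166ms, then roughly 454ms) before giving up and exiting with GUEST_PAUSE. The sketch below approximates that retry-until-budget pattern as it appears in the retry.go log lines; it is a rough illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithRetry reruns a command on failure, waiting an increasing interval
// between attempts, until the overall time budget is spent.
func runWithRetry(budget time.Duration, name string, args ...string) error {
	deadline := time.Now().Add(budget)
	delay := 150 * time.Millisecond
	for {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			fmt.Printf("ok: %d bytes of output\n", len(out))
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %v\n%s", err, out)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the interval, roughly like the log above
	}
}

func main() {
	if err := runWithRetry(2*time.Second, "sudo", "runc", "list", "-f", "json"); err != nil {
		fmt.Println(err)
	}
}
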
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-243084
helpers_test.go:244: (dbg) docker inspect pause-243084:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228",
	        "Created": "2025-12-12T20:04:51.034588781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:04:51.064524552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/hosts",
	        "LogPath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228-json.log",
	        "Name": "/pause-243084",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-243084:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-243084",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228",
	                "LowerDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-243084",
	                "Source": "/var/lib/docker/volumes/pause-243084/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-243084",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-243084",
	                "name.minikube.sigs.k8s.io": "pause-243084",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6689403c076ffea9330dc333b31434dbdcb967b7b51a14d2de152fe1ca429278",
	            "SandboxKey": "/var/run/docker/netns/6689403c076f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-243084": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26c9576ee4f54a6be448aa8018825671d731a8a47aeaf641eb7475f1c232d040",
	                    "EndpointID": "40b14d44a1b632fa467bceaf39b056e674ecb2d173135186020f8eaf77a27be5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:d1:85:56:ef:93",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-243084",
	                        "f5dbceb575f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
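
The sshutil line in the pause log (new ssh client on 127.0.0.1:33024) is derived from this inspect output: the published host port for "22/tcp" under NetworkSettings.Ports. The sketch below performs the same lookup by decoding docker inspect's JSON directly, rather than using a format template as minikube's cli_runner does; the container name is taken from this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspect covers only the fields needed to find the published SSH port.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "pause-243084").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		fmt.Println("unexpected inspect output")
		return
	}
	bindings := containers[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		fmt.Println("no published SSH port")
		return
	}
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
}
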
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-243084 -n pause-243084
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-243084 -n pause-243084: exit status 2 (330.274815ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-243084 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-789448 sudo cri-dockerd --version                                                                                                                                                                               │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                 │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl cat containerd --no-pager                                                                                                                                                                 │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                          │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo cat /etc/containerd/config.toml                                                                                                                                                                     │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo containerd config dump                                                                                                                                                                              │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo crio config                                                                                                                                                                                         │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ delete  │ -p cilium-789448                                                                                                                                                                                                          │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ stop    │ -p NoKubernetes-562130                                                                                                                                                                                                    │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ start   │ -p pause-243084 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p NoKubernetes-562130 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ -p NoKubernetes-562130 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ delete  │ -p NoKubernetes-562130                                                                                                                                                                                                    │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p force-systemd-env-361023                                                                                                                                                                                               │ force-systemd-env-361023  │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p cert-options-427408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p pause-243084 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ cert-options-427408 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ -p cert-options-427408 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p cert-options-427408                                                                                                                                                                                                    │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ pause   │ -p pause-243084 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:05:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:05:34.913520  234824 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:05:34.913780  234824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:05:34.913798  234824 out.go:374] Setting ErrFile to fd 2...
	I1212 20:05:34.913806  234824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:05:34.913994  234824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:05:34.914482  234824 out.go:368] Setting JSON to false
	I1212 20:05:34.915598  234824 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2882,"bootTime":1765567053,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:05:34.915646  234824 start.go:143] virtualization: kvm guest
	I1212 20:05:34.917514  234824 out.go:179] * [kubernetes-upgrade-991615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:05:34.918664  234824 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:05:34.918662  234824 notify.go:221] Checking for updates...
	I1212 20:05:34.920952  234824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:05:34.922149  234824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:05:34.923067  234824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:05:34.924072  234824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:05:34.925014  234824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:05:34.927252  234824 config.go:182] Loaded profile config "cert-expiration-070436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:05:34.927385  234824 config.go:182] Loaded profile config "pause-243084": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:05:34.927457  234824 config.go:182] Loaded profile config "running-upgrade-569692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 20:05:34.927548  234824 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:05:34.953423  234824 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:05:34.953538  234824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:05:35.019504  234824 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 20:05:35.003688853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:05:35.019654  234824 docker.go:319] overlay module found
	I1212 20:05:35.023072  234824 out.go:179] * Using the docker driver based on user configuration
	I1212 20:05:35.024461  234824 start.go:309] selected driver: docker
	I1212 20:05:35.024476  234824 start.go:927] validating driver "docker" against <nil>
	I1212 20:05:35.024491  234824 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:05:35.025066  234824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:05:35.088074  234824 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 20:05:35.077725997 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:05:35.088220  234824 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:05:35.088463  234824 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 20:05:35.091151  234824 out.go:179] * Using Docker driver with root privileges
	I1212 20:05:35.092561  234824 cni.go:84] Creating CNI manager for ""
	I1212 20:05:35.092632  234824 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:05:35.092646  234824 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:05:35.092721  234824 start.go:353] cluster config:
	{Name:kubernetes-upgrade-991615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-991615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:05:35.093968  234824 out.go:179] * Starting "kubernetes-upgrade-991615" primary control-plane node in "kubernetes-upgrade-991615" cluster
	I1212 20:05:35.095053  234824 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:05:35.096329  234824 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:05:35.097522  234824 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:05:35.097560  234824 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1212 20:05:35.097569  234824 cache.go:65] Caching tarball of preloaded images
	I1212 20:05:35.097597  234824 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:05:35.097669  234824 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:05:35.097686  234824 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1212 20:05:35.097801  234824 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/config.json ...
	I1212 20:05:35.097829  234824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/config.json: {Name:mk35618864cddf3d958c4288d850c66d3bd18191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:05:35.119905  234824 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:05:35.119927  234824 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:05:35.119944  234824 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:05:35.119981  234824 start.go:360] acquireMachinesLock for kubernetes-upgrade-991615: {Name:mk12602d3a1a2f0e7a43419f47822e8142f67bb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:05:35.120093  234824 start.go:364] duration metric: took 88.502µs to acquireMachinesLock for "kubernetes-upgrade-991615"
	I1212 20:05:35.120123  234824 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-991615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-991615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:05:35.120216  234824 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452163454Z" level=info msg="RDT not available in the host system"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452172752Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452933415Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452951339Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452964284Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.453714642Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.453728564Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.457987479Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.458007423Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.458731569Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.459259613Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.459331098Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.540951152Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-4dbtr Namespace:kube-system ID:5a85b88d900d4eb155b5f64d7fefe4d27f91bdd536b9dcf0dfa287ac4fe1edbb UID:92ace16c-9772-4122-ae25-c98ba185316c NetNS:/var/run/netns/c74105f9-3498-416d-a089-c7632b773b2b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000902180}] Aliases:map[]}"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.54111227Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-4dbtr for CNI network kindnet (type=ptp)"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541504972Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541525499Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541567075Z" level=info msg="Create NRI interface"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541670365Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541684432Z" level=info msg="runtime interface created"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541697472Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.54170522Z" level=info msg="runtime interface starting up..."
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541712289Z" level=info msg="starting plugins..."
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.54172617Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.542009086Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:05:31 pause-243084 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e8f370c2d2aa6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   5a85b88d900d4       coredns-66bc5c9577-4dbtr               kube-system
	1f4f219d24da1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   22 seconds ago      Running             kindnet-cni               0                   b6b2a5bab5f1f       kindnet-72r8q                          kube-system
	0d758315dc8b6       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   22 seconds ago      Running             kube-proxy                0                   65da3b33429fe       kube-proxy-768fz                       kube-system
	1546a453b9b14       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   32 seconds ago      Running             kube-apiserver            0                   46f333a596886       kube-apiserver-pause-243084            kube-system
	a7198e0776a35       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   32 seconds ago      Running             kube-scheduler            0                   b7fe9b3ef273a       kube-scheduler-pause-243084            kube-system
	47d1914f1d154       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   32 seconds ago      Running             etcd                      0                   777e150d8ef18       etcd-pause-243084                      kube-system
	a1e300479c0a8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   32 seconds ago      Running             kube-controller-manager   0                   05e554299def8       kube-controller-manager-pause-243084   kube-system
	
	
	==> coredns [e8f370c2d2aa697861a24de574a97463dd209390cdae8a867d94358874809cb8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41056 - 38118 "HINFO IN 8470862106684823630.6881257276891624079. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.11987632s
	
	
	==> describe nodes <==
	Name:               pause-243084
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-243084
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=pause-243084
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_05_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:05:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-243084
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:05:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-243084
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                5e6eac7e-2836-4b23-bbfb-bfc6ae4d214a
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4dbtr                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-243084                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-72r8q                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-243084             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-pause-243084    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-768fz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-243084             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node pause-243084 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node pause-243084 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node pause-243084 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node pause-243084 event: Registered Node pause-243084 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-243084 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [47d1914f1d154a7648d210a0ac3121cdfee31a410a61529ef122381bd3ee2fe4] <==
	{"level":"warn","ts":"2025-12-12T20:05:06.256445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.264012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.270915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.277001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.283718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.291062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.297409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.304451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.311071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.317681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.324788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.330654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.347389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.354664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.360549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.410597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T20:05:11.787085Z","caller":"traceutil/trace.go:172","msg":"trace[939889435] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"119.850639ms","start":"2025-12-12T20:05:11.667219Z","end":"2025-12-12T20:05:11.787069Z","steps":["trace[939889435] 'process raft request'  (duration: 119.757639ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:05:11.947369Z","caller":"traceutil/trace.go:172","msg":"trace[259852444] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"130.253886ms","start":"2025-12-12T20:05:11.817102Z","end":"2025-12-12T20:05:11.947356Z","steps":["trace[259852444] 'process raft request'  (duration: 97.138624ms)","trace[259852444] 'compare'  (duration: 33.010124ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:05:12.463870Z","caller":"traceutil/trace.go:172","msg":"trace[532529038] transaction","detail":"{read_only:false; response_revision:287; number_of_response:1; }","duration":"193.785731ms","start":"2025-12-12T20:05:12.270067Z","end":"2025-12-12T20:05:12.463853Z","steps":["trace[532529038] 'process raft request'  (duration: 193.698055ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:05:12.591734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.881282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:05:12.591811Z","caller":"traceutil/trace.go:172","msg":"trace[468229782] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:0; response_revision:287; }","duration":"124.979127ms","start":"2025-12-12T20:05:12.466817Z","end":"2025-12-12T20:05:12.591796Z","steps":["trace[468229782] 'range keys from in-memory index tree'  (duration: 124.811639ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:05:12.718944Z","caller":"traceutil/trace.go:172","msg":"trace[47314688] linearizableReadLoop","detail":"{readStateIndex:298; appliedIndex:298; }","duration":"108.472266ms","start":"2025-12-12T20:05:12.610449Z","end":"2025-12-12T20:05:12.718921Z","steps":["trace[47314688] 'read index received'  (duration: 108.466626ms)","trace[47314688] 'applied index is now lower than readState.Index'  (duration: 4.949µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:05:12.719059Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.594673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:05:12.719090Z","caller":"traceutil/trace.go:172","msg":"trace[1987585637] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:287; }","duration":"108.641823ms","start":"2025-12-12T20:05:12.610440Z","end":"2025-12-12T20:05:12.719082Z","steps":["trace[1987585637] 'agreement among raft nodes before linearized reading'  (duration: 108.55927ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:05:12.719100Z","caller":"traceutil/trace.go:172","msg":"trace[1975015979] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"122.32924ms","start":"2025-12-12T20:05:12.596758Z","end":"2025-12-12T20:05:12.719087Z","steps":["trace[1975015979] 'process raft request'  (duration: 122.234344ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:05:37 up 48 min,  0 user,  load average: 3.01, 1.94, 1.33
	Linux pause-243084 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f4f219d24da1cb0e1ed88f1802750597f260740eb422b534193439a1fa35e5e] <==
	I1212 20:05:15.157063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:05:15.159382       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 20:05:15.159542       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:05:15.159557       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:05:15.159583       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:05:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:05:15.359983       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:05:15.360010       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:05:15.360021       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:05:15.361027       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:05:15.854929       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:05:15.854970       1 metrics.go:72] Registering metrics
	I1212 20:05:15.855089       1 controller.go:711] "Syncing nftables rules"
	I1212 20:05:25.362381       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:05:25.362448       1 main.go:301] handling current node
	I1212 20:05:35.367406       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:05:35.367443       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1546a453b9b14d1a23439d71cf0e13e59110c6e9fccec6b1fd602c89ff0a23f7] <==
	I1212 20:05:06.842541       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 20:05:06.842594       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1212 20:05:06.844185       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:05:06.850780       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:06.852012       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 20:05:06.857555       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:06.857721       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:05:07.026064       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:05:07.746956       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 20:05:07.750629       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:05:07.750643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:05:08.250154       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:05:08.286423       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:05:08.350457       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:05:08.356214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1212 20:05:08.357469       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:05:08.361301       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:05:08.772165       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:05:09.600553       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:05:09.609386       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:05:09.620135       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 20:05:14.523045       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1212 20:05:14.673634       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:14.676706       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:14.770944       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a1e300479c0a86bae02ef355d09dbdd9889387d9785197a8a19ae54bd52c13c9] <==
	I1212 20:05:13.766209       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 20:05:13.769497       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:05:13.769611       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 20:05:13.769622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 20:05:13.770042       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:05:13.770059       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 20:05:13.770067       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 20:05:13.770489       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:05:13.770515       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 20:05:13.770607       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 20:05:13.770695       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-243084"
	I1212 20:05:13.770754       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 20:05:13.770389       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 20:05:13.771156       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 20:05:13.771297       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 20:05:13.772860       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 20:05:13.776584       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 20:05:13.776775       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:05:13.777231       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:05:13.782708       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:05:13.788981       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 20:05:13.791259       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:05:13.793474       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 20:05:13.796851       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:05:28.772716       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0d758315dc8b6433c0715bcbe02b1b392f42622af50a9126615d035b81a7334a] <==
	I1212 20:05:14.932519       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:05:15.010953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:05:15.111509       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:05:15.111548       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 20:05:15.111629       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:05:15.132326       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:05:15.132381       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:05:15.138027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:05:15.138594       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:05:15.138625       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:05:15.141952       1 config.go:200] "Starting service config controller"
	I1212 20:05:15.141981       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:05:15.142016       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:05:15.142022       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:05:15.142036       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:05:15.142042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:05:15.142162       1 config.go:309] "Starting node config controller"
	I1212 20:05:15.142179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:05:15.142188       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:05:15.242155       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:05:15.242177       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:05:15.242177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a7198e0776a35180f558414147085cb0f0bcb58673b28c2b7c096805999ac9d4] <==
	E1212 20:05:06.792385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 20:05:06.792391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 20:05:06.792429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 20:05:06.792444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:05:06.792457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 20:05:06.792478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 20:05:06.792506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:05:06.792527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 20:05:06.792572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:05:06.792572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 20:05:06.792670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 20:05:06.792675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 20:05:07.666751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:05:07.722680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 20:05:07.751782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 20:05:07.778885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:05:07.804868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:05:07.811849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 20:05:07.852089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 20:05:07.962700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 20:05:07.963851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 20:05:08.041226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 20:05:08.043044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:05:08.060741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1212 20:05:09.489308       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:05:10 pause-243084 kubelet[1340]: E1212 20:05:10.537363    1340 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-243084\" already exists" pod="kube-system/kube-apiserver-pause-243084"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: E1212 20:05:10.539241    1340 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-243084\" already exists" pod="kube-system/kube-scheduler-pause-243084"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: I1212 20:05:10.539300    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-243084" podStartSLOduration=1.53925662 podStartE2EDuration="1.53925662s" podCreationTimestamp="2025-12-12 20:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:10.525839402 +0000 UTC m=+1.142934591" watchObservedRunningTime="2025-12-12 20:05:10.53925662 +0000 UTC m=+1.156351808"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: E1212 20:05:10.539543    1340 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-243084\" already exists" pod="kube-system/kube-controller-manager-pause-243084"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: I1212 20:05:10.551162    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-243084" podStartSLOduration=1.551147287 podStartE2EDuration="1.551147287s" podCreationTimestamp="2025-12-12 20:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:10.539500827 +0000 UTC m=+1.156596014" watchObservedRunningTime="2025-12-12 20:05:10.551147287 +0000 UTC m=+1.168242473"
	Dec 12 20:05:13 pause-243084 kubelet[1340]: I1212 20:05:13.844251    1340 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 20:05:13 pause-243084 kubelet[1340]: I1212 20:05:13.845087    1340 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614790    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48530d03-089d-4540-9ce7-f68263447b90-lib-modules\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614839    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3bd36ee-1911-4523-aef2-cd8738331b50-lib-modules\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614874    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48530d03-089d-4540-9ce7-f68263447b90-xtables-lock\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614924    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3bd36ee-1911-4523-aef2-cd8738331b50-kube-proxy\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614955    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3bd36ee-1911-4523-aef2-cd8738331b50-xtables-lock\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614987    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/48530d03-089d-4540-9ce7-f68263447b90-cni-cfg\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.615038    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwkd\" (UniqueName: \"kubernetes.io/projected/48530d03-089d-4540-9ce7-f68263447b90-kube-api-access-jtwkd\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.615090    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llplf\" (UniqueName: \"kubernetes.io/projected/c3bd36ee-1911-4523-aef2-cd8738331b50-kube-api-access-llplf\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:15 pause-243084 kubelet[1340]: I1212 20:05:15.549742    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-72r8q" podStartSLOduration=1.549722102 podStartE2EDuration="1.549722102s" podCreationTimestamp="2025-12-12 20:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:15.549478949 +0000 UTC m=+6.166574136" watchObservedRunningTime="2025-12-12 20:05:15.549722102 +0000 UTC m=+6.166817291"
	Dec 12 20:05:17 pause-243084 kubelet[1340]: I1212 20:05:17.230449    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-768fz" podStartSLOduration=3.230428557 podStartE2EDuration="3.230428557s" podCreationTimestamp="2025-12-12 20:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:15.558758974 +0000 UTC m=+6.175854161" watchObservedRunningTime="2025-12-12 20:05:17.230428557 +0000 UTC m=+7.847523744"
	Dec 12 20:05:25 pause-243084 kubelet[1340]: I1212 20:05:25.584201    1340 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 20:05:25 pause-243084 kubelet[1340]: I1212 20:05:25.693065    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ace16c-9772-4122-ae25-c98ba185316c-config-volume\") pod \"coredns-66bc5c9577-4dbtr\" (UID: \"92ace16c-9772-4122-ae25-c98ba185316c\") " pod="kube-system/coredns-66bc5c9577-4dbtr"
	Dec 12 20:05:25 pause-243084 kubelet[1340]: I1212 20:05:25.693119    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gtx4\" (UniqueName: \"kubernetes.io/projected/92ace16c-9772-4122-ae25-c98ba185316c-kube-api-access-2gtx4\") pod \"coredns-66bc5c9577-4dbtr\" (UID: \"92ace16c-9772-4122-ae25-c98ba185316c\") " pod="kube-system/coredns-66bc5c9577-4dbtr"
	Dec 12 20:05:26 pause-243084 kubelet[1340]: I1212 20:05:26.578717    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4dbtr" podStartSLOduration=12.57869528 podStartE2EDuration="12.57869528s" podCreationTimestamp="2025-12-12 20:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:26.578555768 +0000 UTC m=+17.195650965" watchObservedRunningTime="2025-12-12 20:05:26.57869528 +0000 UTC m=+17.195790467"
	Dec 12 20:05:35 pause-243084 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:05:35 pause-243084 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:05:35 pause-243084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:35 pause-243084 systemd[1]: kubelet.service: Consumed 1.115s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-243084 -n pause-243084
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-243084 -n pause-243084: exit status 2 (326.090951ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-243084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-243084
helpers_test.go:244: (dbg) docker inspect pause-243084:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228",
	        "Created": "2025-12-12T20:04:51.034588781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:04:51.064524552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/hosts",
	        "LogPath": "/var/lib/docker/containers/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228/f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228-json.log",
	        "Name": "/pause-243084",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-243084:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-243084",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f5dbceb575f003ef2f5f9090af4bc08032a1fe5b5c0bebc2032d1c4e33b8f228",
	                "LowerDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917b47d0f509407dff22b3e08064817b2d53db5d63951512c2439820693214c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-243084",
	                "Source": "/var/lib/docker/volumes/pause-243084/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-243084",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-243084",
	                "name.minikube.sigs.k8s.io": "pause-243084",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6689403c076ffea9330dc333b31434dbdcb967b7b51a14d2de152fe1ca429278",
	            "SandboxKey": "/var/run/docker/netns/6689403c076f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-243084": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26c9576ee4f54a6be448aa8018825671d731a8a47aeaf641eb7475f1c232d040",
	                    "EndpointID": "40b14d44a1b632fa467bceaf39b056e674ecb2d173135186020f8eaf77a27be5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:d1:85:56:ef:93",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-243084",
	                        "f5dbceb575f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-243084 -n pause-243084
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-243084 -n pause-243084: exit status 2 (337.342397ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-243084 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-243084 logs -n 25: (2.785695422s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-789448 sudo cri-dockerd --version                                                                                                                                                                               │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                 │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl cat containerd --no-pager                                                                                                                                                                 │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                          │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo cat /etc/containerd/config.toml                                                                                                                                                                     │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo containerd config dump                                                                                                                                                                              │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ ssh     │ -p cilium-789448 sudo crio config                                                                                                                                                                                         │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ delete  │ -p cilium-789448                                                                                                                                                                                                          │ cilium-789448             │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ stop    │ -p NoKubernetes-562130                                                                                                                                                                                                    │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ start   │ -p pause-243084 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p NoKubernetes-562130 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ -p NoKubernetes-562130 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ delete  │ -p NoKubernetes-562130                                                                                                                                                                                                    │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p force-systemd-env-361023                                                                                                                                                                                               │ force-systemd-env-361023  │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p cert-options-427408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p pause-243084 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ cert-options-427408 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ -p cert-options-427408 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p cert-options-427408                                                                                                                                                                                                    │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ pause   │ -p pause-243084 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:05:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:05:34.913520  234824 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:05:34.913780  234824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:05:34.913798  234824 out.go:374] Setting ErrFile to fd 2...
	I1212 20:05:34.913806  234824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:05:34.913994  234824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:05:34.914482  234824 out.go:368] Setting JSON to false
	I1212 20:05:34.915598  234824 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2882,"bootTime":1765567053,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:05:34.915646  234824 start.go:143] virtualization: kvm guest
	I1212 20:05:34.917514  234824 out.go:179] * [kubernetes-upgrade-991615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:05:34.918664  234824 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:05:34.918662  234824 notify.go:221] Checking for updates...
	I1212 20:05:34.920952  234824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:05:34.922149  234824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:05:34.923067  234824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:05:34.924072  234824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:05:34.925014  234824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:05:34.927252  234824 config.go:182] Loaded profile config "cert-expiration-070436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:05:34.927385  234824 config.go:182] Loaded profile config "pause-243084": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:05:34.927457  234824 config.go:182] Loaded profile config "running-upgrade-569692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 20:05:34.927548  234824 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:05:34.953423  234824 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:05:34.953538  234824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:05:35.019504  234824 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 20:05:35.003688853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:05:35.019654  234824 docker.go:319] overlay module found
	I1212 20:05:35.023072  234824 out.go:179] * Using the docker driver based on user configuration
	I1212 20:05:35.024461  234824 start.go:309] selected driver: docker
	I1212 20:05:35.024476  234824 start.go:927] validating driver "docker" against <nil>
	I1212 20:05:35.024491  234824 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:05:35.025066  234824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:05:35.088074  234824 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 20:05:35.077725997 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:05:35.088220  234824 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:05:35.088463  234824 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 20:05:35.091151  234824 out.go:179] * Using Docker driver with root privileges
	I1212 20:05:35.092561  234824 cni.go:84] Creating CNI manager for ""
	I1212 20:05:35.092632  234824 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:05:35.092646  234824 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:05:35.092721  234824 start.go:353] cluster config:
	{Name:kubernetes-upgrade-991615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-991615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:05:35.093968  234824 out.go:179] * Starting "kubernetes-upgrade-991615" primary control-plane node in "kubernetes-upgrade-991615" cluster
	I1212 20:05:35.095053  234824 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:05:35.096329  234824 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:05:35.097522  234824 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:05:35.097560  234824 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1212 20:05:35.097569  234824 cache.go:65] Caching tarball of preloaded images
	I1212 20:05:35.097597  234824 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:05:35.097669  234824 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:05:35.097686  234824 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1212 20:05:35.097801  234824 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/config.json ...
	I1212 20:05:35.097829  234824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/config.json: {Name:mk35618864cddf3d958c4288d850c66d3bd18191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:05:35.119905  234824 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:05:35.119927  234824 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:05:35.119944  234824 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:05:35.119981  234824 start.go:360] acquireMachinesLock for kubernetes-upgrade-991615: {Name:mk12602d3a1a2f0e7a43419f47822e8142f67bb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:05:35.120093  234824 start.go:364] duration metric: took 88.502µs to acquireMachinesLock for "kubernetes-upgrade-991615"
	I1212 20:05:35.120123  234824 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-991615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-991615 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:05:35.120216  234824 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:05:35.009394  204508 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:05:35.009812  204508 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1212 20:05:35.009876  204508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:05:35.009927  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:05:35.053063  204508 cri.go:89] found id: "22461868875c1b70fb82a805bcad4f6ae38269d50c2d4df7dec4d1cbbb836cab"
	I1212 20:05:35.053086  204508 cri.go:89] found id: ""
	I1212 20:05:35.053095  204508 logs.go:282] 1 containers: [22461868875c1b70fb82a805bcad4f6ae38269d50c2d4df7dec4d1cbbb836cab]
	I1212 20:05:35.053306  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.058597  204508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:05:35.058666  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:05:35.097589  204508 cri.go:89] found id: "2379f4a421e91272a8be476c2c666bf2ad2e55475b4065f9fac1d2a46c2ea8ee"
	I1212 20:05:35.097611  204508 cri.go:89] found id: ""
	I1212 20:05:35.097621  204508 logs.go:282] 1 containers: [2379f4a421e91272a8be476c2c666bf2ad2e55475b4065f9fac1d2a46c2ea8ee]
	I1212 20:05:35.097676  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.101908  204508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:05:35.101966  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:05:35.140854  204508 cri.go:89] found id: ""
	I1212 20:05:35.140880  204508 logs.go:282] 0 containers: []
	W1212 20:05:35.140890  204508 logs.go:284] No container was found matching "coredns"
	I1212 20:05:35.140898  204508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:05:35.140955  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:05:35.184949  204508 cri.go:89] found id: "0ffb29dfe57e5ff61300fa7a939fcaa8a4599d744528b001ae4103347d0fab85"
	I1212 20:05:35.184969  204508 cri.go:89] found id: "5624ed06b8a09c4d41b11d4864b66f3927ea8453624bfa378e6eb2098372f05b"
	I1212 20:05:35.184975  204508 cri.go:89] found id: ""
	I1212 20:05:35.184984  204508 logs.go:282] 2 containers: [0ffb29dfe57e5ff61300fa7a939fcaa8a4599d744528b001ae4103347d0fab85 5624ed06b8a09c4d41b11d4864b66f3927ea8453624bfa378e6eb2098372f05b]
	I1212 20:05:35.185025  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.189557  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.193892  204508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:05:35.193954  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:05:35.234043  204508 cri.go:89] found id: "dce098130adfed62559b28f2ad04e57515f8e48ab551f88090a2954127994ab4"
	I1212 20:05:35.234061  204508 cri.go:89] found id: "1669c762050d5f39da0a02c23a66db59a537152ae5ba7e66c195cc2ba083d720"
	I1212 20:05:35.234065  204508 cri.go:89] found id: ""
	I1212 20:05:35.234072  204508 logs.go:282] 2 containers: [dce098130adfed62559b28f2ad04e57515f8e48ab551f88090a2954127994ab4 1669c762050d5f39da0a02c23a66db59a537152ae5ba7e66c195cc2ba083d720]
	I1212 20:05:35.234114  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.238136  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.241664  204508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:05:35.241724  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:05:35.281093  204508 cri.go:89] found id: "b96b61af46e20d05075dd1dc84b89e6dcbd51e51bf5050bd46e2c23881276a74"
	I1212 20:05:35.281115  204508 cri.go:89] found id: ""
	I1212 20:05:35.281125  204508 logs.go:282] 1 containers: [b96b61af46e20d05075dd1dc84b89e6dcbd51e51bf5050bd46e2c23881276a74]
	I1212 20:05:35.281179  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.285148  204508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:05:35.285203  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:05:35.322982  204508 cri.go:89] found id: "00761b3e7a3412b22bb8c7778689af6185e7922e0f363ea33ea0a05fca43d5bb"
	I1212 20:05:35.323005  204508 cri.go:89] found id: "def73edbd1c6c36266ea28c0288ab99e4d9e7f4f22dbdc3e54b8f3435337bf67"
	I1212 20:05:35.323010  204508 cri.go:89] found id: ""
	I1212 20:05:35.323018  204508 logs.go:282] 2 containers: [00761b3e7a3412b22bb8c7778689af6185e7922e0f363ea33ea0a05fca43d5bb def73edbd1c6c36266ea28c0288ab99e4d9e7f4f22dbdc3e54b8f3435337bf67]
	I1212 20:05:35.323065  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.326737  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.330235  204508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:05:35.330301  204508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:05:35.366108  204508 cri.go:89] found id: "4b50c0555bfefd63c31d2edbb45749358e342bc3fd9f4331ce4313875eca65ca"
	I1212 20:05:35.366129  204508 cri.go:89] found id: ""
	I1212 20:05:35.366136  204508 logs.go:282] 1 containers: [4b50c0555bfefd63c31d2edbb45749358e342bc3fd9f4331ce4313875eca65ca]
	I1212 20:05:35.366184  204508 ssh_runner.go:195] Run: which crictl
	I1212 20:05:35.370053  204508 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:05:35.370084  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:05:35.449045  204508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:05:35.449092  204508 logs.go:123] Gathering logs for kube-apiserver [22461868875c1b70fb82a805bcad4f6ae38269d50c2d4df7dec4d1cbbb836cab] ...
	I1212 20:05:35.449128  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22461868875c1b70fb82a805bcad4f6ae38269d50c2d4df7dec4d1cbbb836cab"
	I1212 20:05:35.491791  204508 logs.go:123] Gathering logs for kube-scheduler [0ffb29dfe57e5ff61300fa7a939fcaa8a4599d744528b001ae4103347d0fab85] ...
	I1212 20:05:35.491821  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ffb29dfe57e5ff61300fa7a939fcaa8a4599d744528b001ae4103347d0fab85"
	I1212 20:05:35.575386  204508 logs.go:123] Gathering logs for kube-controller-manager [b96b61af46e20d05075dd1dc84b89e6dcbd51e51bf5050bd46e2c23881276a74] ...
	I1212 20:05:35.575442  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b96b61af46e20d05075dd1dc84b89e6dcbd51e51bf5050bd46e2c23881276a74"
	I1212 20:05:35.618255  204508 logs.go:123] Gathering logs for kindnet [00761b3e7a3412b22bb8c7778689af6185e7922e0f363ea33ea0a05fca43d5bb] ...
	I1212 20:05:35.618296  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00761b3e7a3412b22bb8c7778689af6185e7922e0f363ea33ea0a05fca43d5bb"
	I1212 20:05:35.667372  204508 logs.go:123] Gathering logs for kindnet [def73edbd1c6c36266ea28c0288ab99e4d9e7f4f22dbdc3e54b8f3435337bf67] ...
	I1212 20:05:35.667404  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def73edbd1c6c36266ea28c0288ab99e4d9e7f4f22dbdc3e54b8f3435337bf67"
	I1212 20:05:35.709340  204508 logs.go:123] Gathering logs for kubelet ...
	I1212 20:05:35.709364  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:05:35.806246  204508 logs.go:123] Gathering logs for kube-scheduler [5624ed06b8a09c4d41b11d4864b66f3927ea8453624bfa378e6eb2098372f05b] ...
	I1212 20:05:35.806293  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5624ed06b8a09c4d41b11d4864b66f3927ea8453624bfa378e6eb2098372f05b"
	I1212 20:05:35.854182  204508 logs.go:123] Gathering logs for kube-proxy [1669c762050d5f39da0a02c23a66db59a537152ae5ba7e66c195cc2ba083d720] ...
	I1212 20:05:35.854210  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1669c762050d5f39da0a02c23a66db59a537152ae5ba7e66c195cc2ba083d720"
	I1212 20:05:35.893102  204508 logs.go:123] Gathering logs for container status ...
	I1212 20:05:35.893141  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:05:35.939161  204508 logs.go:123] Gathering logs for dmesg ...
	I1212 20:05:35.939188  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:05:35.957231  204508 logs.go:123] Gathering logs for etcd [2379f4a421e91272a8be476c2c666bf2ad2e55475b4065f9fac1d2a46c2ea8ee] ...
	I1212 20:05:35.957257  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2379f4a421e91272a8be476c2c666bf2ad2e55475b4065f9fac1d2a46c2ea8ee"
	I1212 20:05:35.999063  204508 logs.go:123] Gathering logs for kube-proxy [dce098130adfed62559b28f2ad04e57515f8e48ab551f88090a2954127994ab4] ...
	I1212 20:05:35.999094  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dce098130adfed62559b28f2ad04e57515f8e48ab551f88090a2954127994ab4"
	I1212 20:05:36.053244  204508 logs.go:123] Gathering logs for storage-provisioner [4b50c0555bfefd63c31d2edbb45749358e342bc3fd9f4331ce4313875eca65ca] ...
	I1212 20:05:36.053293  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b50c0555bfefd63c31d2edbb45749358e342bc3fd9f4331ce4313875eca65ca"
	I1212 20:05:36.094045  204508 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:05:36.094080  204508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452163454Z" level=info msg="RDT not available in the host system"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452172752Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452933415Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452951339Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.452964284Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.453714642Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.453728564Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.457987479Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.458007423Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.458731569Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.459259613Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.459331098Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.540951152Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-4dbtr Namespace:kube-system ID:5a85b88d900d4eb155b5f64d7fefe4d27f91bdd536b9dcf0dfa287ac4fe1edbb UID:92ace16c-9772-4122-ae25-c98ba185316c NetNS:/var/run/netns/c74105f9-3498-416d-a089-c7632b773b2b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000902180}] Aliases:map[]}"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.54111227Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-4dbtr for CNI network kindnet (type=ptp)"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541504972Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541525499Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541567075Z" level=info msg="Create NRI interface"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541670365Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541684432Z" level=info msg="runtime interface created"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541697472Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.54170522Z" level=info msg="runtime interface starting up..."
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.541712289Z" level=info msg="starting plugins..."
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.54172617Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:05:31 pause-243084 crio[2201]: time="2025-12-12T20:05:31.542009086Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:05:31 pause-243084 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e8f370c2d2aa6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   5a85b88d900d4       coredns-66bc5c9577-4dbtr               kube-system
	1f4f219d24da1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   b6b2a5bab5f1f       kindnet-72r8q                          kube-system
	0d758315dc8b6       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   24 seconds ago      Running             kube-proxy                0                   65da3b33429fe       kube-proxy-768fz                       kube-system
	1546a453b9b14       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   35 seconds ago      Running             kube-apiserver            0                   46f333a596886       kube-apiserver-pause-243084            kube-system
	a7198e0776a35       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   35 seconds ago      Running             kube-scheduler            0                   b7fe9b3ef273a       kube-scheduler-pause-243084            kube-system
	47d1914f1d154       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   35 seconds ago      Running             etcd                      0                   777e150d8ef18       etcd-pause-243084                      kube-system
	a1e300479c0a8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   35 seconds ago      Running             kube-controller-manager   0                   05e554299def8       kube-controller-manager-pause-243084   kube-system
	
	
	==> coredns [e8f370c2d2aa697861a24de574a97463dd209390cdae8a867d94358874809cb8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41056 - 38118 "HINFO IN 8470862106684823630.6881257276891624079. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.11987632s
	
	
	==> describe nodes <==
	Name:               pause-243084
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-243084
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=pause-243084
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_05_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:05:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-243084
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:05:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:05:25 +0000   Fri, 12 Dec 2025 20:05:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-243084
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                5e6eac7e-2836-4b23-bbfb-bfc6ae4d214a
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4dbtr                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-243084                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-72r8q                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-243084             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-243084    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-768fz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-243084             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-243084 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-243084 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-243084 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-243084 event: Registered Node pause-243084 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-243084 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [47d1914f1d154a7648d210a0ac3121cdfee31a410a61529ef122381bd3ee2fe4] <==
	{"level":"warn","ts":"2025-12-12T20:05:06.264012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.270915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.277001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.283718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.291062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.297409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.304451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.311071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.317681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.324788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.330654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.347389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.354664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.360549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:05:06.410597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T20:05:11.787085Z","caller":"traceutil/trace.go:172","msg":"trace[939889435] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"119.850639ms","start":"2025-12-12T20:05:11.667219Z","end":"2025-12-12T20:05:11.787069Z","steps":["trace[939889435] 'process raft request'  (duration: 119.757639ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:05:11.947369Z","caller":"traceutil/trace.go:172","msg":"trace[259852444] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"130.253886ms","start":"2025-12-12T20:05:11.817102Z","end":"2025-12-12T20:05:11.947356Z","steps":["trace[259852444] 'process raft request'  (duration: 97.138624ms)","trace[259852444] 'compare'  (duration: 33.010124ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:05:12.463870Z","caller":"traceutil/trace.go:172","msg":"trace[532529038] transaction","detail":"{read_only:false; response_revision:287; number_of_response:1; }","duration":"193.785731ms","start":"2025-12-12T20:05:12.270067Z","end":"2025-12-12T20:05:12.463853Z","steps":["trace[532529038] 'process raft request'  (duration: 193.698055ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:05:12.591734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.881282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:05:12.591811Z","caller":"traceutil/trace.go:172","msg":"trace[468229782] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:0; response_revision:287; }","duration":"124.979127ms","start":"2025-12-12T20:05:12.466817Z","end":"2025-12-12T20:05:12.591796Z","steps":["trace[468229782] 'range keys from in-memory index tree'  (duration: 124.811639ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:05:12.718944Z","caller":"traceutil/trace.go:172","msg":"trace[47314688] linearizableReadLoop","detail":"{readStateIndex:298; appliedIndex:298; }","duration":"108.472266ms","start":"2025-12-12T20:05:12.610449Z","end":"2025-12-12T20:05:12.718921Z","steps":["trace[47314688] 'read index received'  (duration: 108.466626ms)","trace[47314688] 'applied index is now lower than readState.Index'  (duration: 4.949µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:05:12.719059Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.594673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:05:12.719090Z","caller":"traceutil/trace.go:172","msg":"trace[1987585637] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:287; }","duration":"108.641823ms","start":"2025-12-12T20:05:12.610440Z","end":"2025-12-12T20:05:12.719082Z","steps":["trace[1987585637] 'agreement among raft nodes before linearized reading'  (duration: 108.55927ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:05:12.719100Z","caller":"traceutil/trace.go:172","msg":"trace[1975015979] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"122.32924ms","start":"2025-12-12T20:05:12.596758Z","end":"2025-12-12T20:05:12.719087Z","steps":["trace[1975015979] 'process raft request'  (duration: 122.234344ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:05:38.510962Z","caller":"traceutil/trace.go:172","msg":"trace[1629479774] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"127.569391ms","start":"2025-12-12T20:05:38.383377Z","end":"2025-12-12T20:05:38.510946Z","steps":["trace[1629479774] 'process raft request'  (duration: 63.19443ms)","trace[1629479774] 'compare'  (duration: 64.294333ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:05:41 up 48 min,  0 user,  load average: 3.17, 2.00, 1.35
	Linux pause-243084 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f4f219d24da1cb0e1ed88f1802750597f260740eb422b534193439a1fa35e5e] <==
	I1212 20:05:15.157063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:05:15.159382       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 20:05:15.159542       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:05:15.159557       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:05:15.159583       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:05:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:05:15.359983       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:05:15.360010       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:05:15.360021       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:05:15.361027       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:05:15.854929       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:05:15.854970       1 metrics.go:72] Registering metrics
	I1212 20:05:15.855089       1 controller.go:711] "Syncing nftables rules"
	I1212 20:05:25.362381       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:05:25.362448       1 main.go:301] handling current node
	I1212 20:05:35.367406       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:05:35.367443       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1546a453b9b14d1a23439d71cf0e13e59110c6e9fccec6b1fd602c89ff0a23f7] <==
	I1212 20:05:06.842541       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 20:05:06.842594       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1212 20:05:06.844185       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:05:06.850780       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:06.852012       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 20:05:06.857555       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:06.857721       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:05:07.026064       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:05:07.746956       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 20:05:07.750629       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:05:07.750643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:05:08.250154       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:05:08.286423       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:05:08.350457       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:05:08.356214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1212 20:05:08.357469       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:05:08.361301       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:05:08.772165       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:05:09.600553       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:05:09.609386       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:05:09.620135       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 20:05:14.523045       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1212 20:05:14.673634       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:14.676706       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:05:14.770944       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a1e300479c0a86bae02ef355d09dbdd9889387d9785197a8a19ae54bd52c13c9] <==
	I1212 20:05:13.766209       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 20:05:13.769497       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:05:13.769611       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 20:05:13.769622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 20:05:13.770042       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:05:13.770059       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 20:05:13.770067       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 20:05:13.770489       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:05:13.770515       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 20:05:13.770607       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 20:05:13.770695       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-243084"
	I1212 20:05:13.770754       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 20:05:13.770389       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 20:05:13.771156       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 20:05:13.771297       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 20:05:13.772860       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 20:05:13.776584       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 20:05:13.776775       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:05:13.777231       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:05:13.782708       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:05:13.788981       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 20:05:13.791259       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:05:13.793474       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 20:05:13.796851       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:05:28.772716       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0d758315dc8b6433c0715bcbe02b1b392f42622af50a9126615d035b81a7334a] <==
	I1212 20:05:14.932519       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:05:15.010953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:05:15.111509       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:05:15.111548       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 20:05:15.111629       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:05:15.132326       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:05:15.132381       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:05:15.138027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:05:15.138594       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:05:15.138625       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:05:15.141952       1 config.go:200] "Starting service config controller"
	I1212 20:05:15.141981       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:05:15.142016       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:05:15.142022       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:05:15.142036       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:05:15.142042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:05:15.142162       1 config.go:309] "Starting node config controller"
	I1212 20:05:15.142179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:05:15.142188       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:05:15.242155       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:05:15.242177       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:05:15.242177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a7198e0776a35180f558414147085cb0f0bcb58673b28c2b7c096805999ac9d4] <==
	E1212 20:05:06.792385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 20:05:06.792391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 20:05:06.792429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 20:05:06.792444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:05:06.792457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 20:05:06.792478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 20:05:06.792506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:05:06.792527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 20:05:06.792572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:05:06.792572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 20:05:06.792670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 20:05:06.792675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 20:05:07.666751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:05:07.722680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 20:05:07.751782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 20:05:07.778885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:05:07.804868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:05:07.811849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 20:05:07.852089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 20:05:07.962700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 20:05:07.963851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 20:05:08.041226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 20:05:08.043044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:05:08.060741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1212 20:05:09.489308       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:05:10 pause-243084 kubelet[1340]: E1212 20:05:10.537363    1340 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-243084\" already exists" pod="kube-system/kube-apiserver-pause-243084"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: E1212 20:05:10.539241    1340 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-243084\" already exists" pod="kube-system/kube-scheduler-pause-243084"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: I1212 20:05:10.539300    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-243084" podStartSLOduration=1.53925662 podStartE2EDuration="1.53925662s" podCreationTimestamp="2025-12-12 20:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:10.525839402 +0000 UTC m=+1.142934591" watchObservedRunningTime="2025-12-12 20:05:10.53925662 +0000 UTC m=+1.156351808"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: E1212 20:05:10.539543    1340 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-243084\" already exists" pod="kube-system/kube-controller-manager-pause-243084"
	Dec 12 20:05:10 pause-243084 kubelet[1340]: I1212 20:05:10.551162    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-243084" podStartSLOduration=1.551147287 podStartE2EDuration="1.551147287s" podCreationTimestamp="2025-12-12 20:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:10.539500827 +0000 UTC m=+1.156596014" watchObservedRunningTime="2025-12-12 20:05:10.551147287 +0000 UTC m=+1.168242473"
	Dec 12 20:05:13 pause-243084 kubelet[1340]: I1212 20:05:13.844251    1340 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 20:05:13 pause-243084 kubelet[1340]: I1212 20:05:13.845087    1340 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614790    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48530d03-089d-4540-9ce7-f68263447b90-lib-modules\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614839    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3bd36ee-1911-4523-aef2-cd8738331b50-lib-modules\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614874    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48530d03-089d-4540-9ce7-f68263447b90-xtables-lock\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614924    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3bd36ee-1911-4523-aef2-cd8738331b50-kube-proxy\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614955    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3bd36ee-1911-4523-aef2-cd8738331b50-xtables-lock\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.614987    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/48530d03-089d-4540-9ce7-f68263447b90-cni-cfg\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.615038    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwkd\" (UniqueName: \"kubernetes.io/projected/48530d03-089d-4540-9ce7-f68263447b90-kube-api-access-jtwkd\") pod \"kindnet-72r8q\" (UID: \"48530d03-089d-4540-9ce7-f68263447b90\") " pod="kube-system/kindnet-72r8q"
	Dec 12 20:05:14 pause-243084 kubelet[1340]: I1212 20:05:14.615090    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llplf\" (UniqueName: \"kubernetes.io/projected/c3bd36ee-1911-4523-aef2-cd8738331b50-kube-api-access-llplf\") pod \"kube-proxy-768fz\" (UID: \"c3bd36ee-1911-4523-aef2-cd8738331b50\") " pod="kube-system/kube-proxy-768fz"
	Dec 12 20:05:15 pause-243084 kubelet[1340]: I1212 20:05:15.549742    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-72r8q" podStartSLOduration=1.549722102 podStartE2EDuration="1.549722102s" podCreationTimestamp="2025-12-12 20:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:15.549478949 +0000 UTC m=+6.166574136" watchObservedRunningTime="2025-12-12 20:05:15.549722102 +0000 UTC m=+6.166817291"
	Dec 12 20:05:17 pause-243084 kubelet[1340]: I1212 20:05:17.230449    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-768fz" podStartSLOduration=3.230428557 podStartE2EDuration="3.230428557s" podCreationTimestamp="2025-12-12 20:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:15.558758974 +0000 UTC m=+6.175854161" watchObservedRunningTime="2025-12-12 20:05:17.230428557 +0000 UTC m=+7.847523744"
	Dec 12 20:05:25 pause-243084 kubelet[1340]: I1212 20:05:25.584201    1340 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 20:05:25 pause-243084 kubelet[1340]: I1212 20:05:25.693065    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ace16c-9772-4122-ae25-c98ba185316c-config-volume\") pod \"coredns-66bc5c9577-4dbtr\" (UID: \"92ace16c-9772-4122-ae25-c98ba185316c\") " pod="kube-system/coredns-66bc5c9577-4dbtr"
	Dec 12 20:05:25 pause-243084 kubelet[1340]: I1212 20:05:25.693119    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gtx4\" (UniqueName: \"kubernetes.io/projected/92ace16c-9772-4122-ae25-c98ba185316c-kube-api-access-2gtx4\") pod \"coredns-66bc5c9577-4dbtr\" (UID: \"92ace16c-9772-4122-ae25-c98ba185316c\") " pod="kube-system/coredns-66bc5c9577-4dbtr"
	Dec 12 20:05:26 pause-243084 kubelet[1340]: I1212 20:05:26.578717    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4dbtr" podStartSLOduration=12.57869528 podStartE2EDuration="12.57869528s" podCreationTimestamp="2025-12-12 20:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:05:26.578555768 +0000 UTC m=+17.195650965" watchObservedRunningTime="2025-12-12 20:05:26.57869528 +0000 UTC m=+17.195790467"
	Dec 12 20:05:35 pause-243084 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:05:35 pause-243084 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:05:35 pause-243084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:05:35 pause-243084 systemd[1]: kubelet.service: Consumed 1.115s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-243084 -n pause-243084
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-243084 -n pause-243084: exit status 2 (366.824581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-243084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.96s)
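Note on this failure: the kubelet journal above ends with kubelet.service stopping cleanly, yet the follow-up status probe still reports the API server as Running, so the pause attempt appears to have failed at the container-runtime step rather than during kubelet shutdown. The pause-243084 profile was deleted later in this run (see the audit log below), so the following re-check is illustrative only; it reuses commands that already appear elsewhere in this report:

	# sketch: re-checking node state by hand after a failed pause (profile name as used in this test)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-243084 -n pause-243084
	out/minikube-linux-amd64 ssh -p pause-243084 sudo systemctl is-active kubelet
	out/minikube-linux-amd64 ssh -p pause-243084 sudo runc list -f json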

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.543324ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:09:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-824670 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-824670 describe deploy/metrics-server -n kube-system: exit status 1 (57.271577ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-824670 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
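Note on this failure: the stderr above shows the enable aborting at minikube's paused-state check, because "sudo runc list -f json" on the node exits 1 with "open /run/runc: no such file or directory"; the metrics-server deployment is therefore never created, which is why the kubectl describe and the image assertion also fail. A hand-run version of that probe (illustrative only; the commands mirror ones used elsewhere in this report, with the old-k8s-version-824670 profile from this test) would be:

	# sketch: reproducing the paused-state probe that MK_ADDON_ENABLE_PAUSED reports
	out/minikube-linux-amd64 ssh -p old-k8s-version-824670 sudo ls /run/runc
	out/minikube-linux-amd64 ssh -p old-k8s-version-824670 sudo runc list -f json
	out/minikube-linux-amd64 ssh -p old-k8s-version-824670 sudo crictl ps

The last command shows whether cri-o is still running the cluster's containers even though /run/runc is absent, which separates a broken paused-state probe from an actually broken cluster.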
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-824670
helpers_test.go:244: (dbg) docker inspect old-k8s-version-824670:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f",
	        "Created": "2025-12-12T20:08:24.734370557Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:08:24.770846758Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/hosts",
	        "LogPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f-json.log",
	        "Name": "/old-k8s-version-824670",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-824670:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-824670",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f",
	                "LowerDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-824670",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-824670/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-824670",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-824670",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-824670",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fe804b8998e510e4fc35b5d7e2f04a28775813cef0580fd890511d15e9d45b26",
	            "SandboxKey": "/var/run/docker/netns/fe804b8998e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-824670": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "54eba6dc9ad901e89d943167287789d7ba6943774fa37cc0a202f7a86e0bfc9a",
	                    "EndpointID": "c1728d36137f1af825212790bf7157e7b1d3f9bca297e0edf38286367789c7ff",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "32:32:a9:e1:71:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-824670",
	                        "5ab927c640d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-824670 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-824670 logs -n 25: (1.092499781s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p pause-243084 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p NoKubernetes-562130 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ ssh     │ -p NoKubernetes-562130 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ delete  │ -p NoKubernetes-562130                                                                                                                                                                                                                        │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p force-systemd-env-361023                                                                                                                                                                                                                   │ force-systemd-env-361023  │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p cert-options-427408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p pause-243084 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ cert-options-427408 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ -p cert-options-427408 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p cert-options-427408                                                                                                                                                                                                                        │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ pause   │ -p pause-243084 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p pause-243084                                                                                                                                                                                                                               │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ stopped-upgrade-180826    │ jenkins │ v1.35.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                  │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ stopped-upgrade-180826 stop                                                                                                                                                                                                                   │ stopped-upgrade-180826    │ jenkins │ v1.35.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:06 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-180826    │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │                     │
	│ delete  │ -p running-upgrade-569692                                                                                                                                                                                                                     │ running-upgrade-569692    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-824670    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ delete  │ -p cert-expiration-070436                                                                                                                                                                                                                     │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-753103         │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-824670    │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:08:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:08:31.121762  265161 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:08:31.121955  265161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:08:31.121966  265161 out.go:374] Setting ErrFile to fd 2...
	I1212 20:08:31.121973  265161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:08:31.122187  265161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:08:31.122771  265161 out.go:368] Setting JSON to false
	I1212 20:08:31.124154  265161 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3058,"bootTime":1765567053,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:08:31.124216  265161 start.go:143] virtualization: kvm guest
	I1212 20:08:31.126591  265161 out.go:179] * [no-preload-753103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:08:31.127758  265161 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:08:31.127787  265161 notify.go:221] Checking for updates...
	I1212 20:08:31.130186  265161 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:08:31.131499  265161 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:08:31.132688  265161 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:08:31.133911  265161 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:08:31.137455  265161 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:08:31.138970  265161 config.go:182] Loaded profile config "kubernetes-upgrade-991615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:08:31.139076  265161 config.go:182] Loaded profile config "old-k8s-version-824670": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 20:08:31.139143  265161 config.go:182] Loaded profile config "stopped-upgrade-180826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 20:08:31.139228  265161 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:08:31.166766  265161 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:08:31.166897  265161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:08:31.218894  265161 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:08:31.209872955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:08:31.218994  265161 docker.go:319] overlay module found
	I1212 20:08:31.221397  265161 out.go:179] * Using the docker driver based on user configuration
	I1212 20:08:31.222502  265161 start.go:309] selected driver: docker
	I1212 20:08:31.222514  265161 start.go:927] validating driver "docker" against <nil>
	I1212 20:08:31.222525  265161 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:08:31.223046  265161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:08:31.278338  265161 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:08:31.268478912 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:08:31.278479  265161 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:08:31.278674  265161 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:08:31.280187  265161 out.go:179] * Using Docker driver with root privileges
	I1212 20:08:31.281231  265161 cni.go:84] Creating CNI manager for ""
	I1212 20:08:31.281308  265161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:31.281320  265161 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:08:31.281373  265161 start.go:353] cluster config:
	{Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:08:31.282485  265161 out.go:179] * Starting "no-preload-753103" primary control-plane node in "no-preload-753103" cluster
	I1212 20:08:31.283344  265161 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:08:31.284391  265161 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:08:31.285310  265161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:08:31.285385  265161 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:08:31.285404  265161 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/config.json ...
	I1212 20:08:31.285426  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/config.json: {Name:mkd8a2177844ac0db49bb2822f66a51efdeb8945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.285585  265161 cache.go:107] acquiring lock: {Name:mkd03888e9d28c9db065b51c032322735ca0cefa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285624  265161 cache.go:107] acquiring lock: {Name:mk459cb9c4c0f7c593fd5037410787d5ad4d4a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285618  265161 cache.go:107] acquiring lock: {Name:mk6749e52897d345dd08e6cd0c23af395805aa99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285623  265161 cache.go:107] acquiring lock: {Name:mkbd6b49ab9e482ef9676c3a800f255aea55c704 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285710  265161 cache.go:107] acquiring lock: {Name:mk2510ba3b96b784848e2843cf1d744743c7eaf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285755  265161 cache.go:115] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1212 20:08:31.285746  265161 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:31.285769  265161 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:31.285768  265161 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 145.537µs
	I1212 20:08:31.285795  265161 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1212 20:08:31.285730  265161 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:31.285832  265161 cache.go:107] acquiring lock: {Name:mk0a87bae71250db2df2add52f55e5948ddda9b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285930  265161 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:31.285971  265161 cache.go:115] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1212 20:08:31.285584  265161 cache.go:107] acquiring lock: {Name:mka236661706a3579df9020867bc2d663aaca30d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285981  265161 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 196.727µs
	I1212 20:08:31.286003  265161 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1212 20:08:31.285595  265161 cache.go:107] acquiring lock: {Name:mk82c937e9f82a7a532182865f786f0506a4e889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.286142  265161 cache.go:115] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 20:08:31.286159  265161 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 583.078µs
	I1212 20:08:31.286171  265161 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 20:08:31.286192  265161 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:31.287080  265161 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:31.287074  265161 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:31.287079  265161 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:31.287076  265161 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:31.287134  265161 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:31.306096  265161 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:08:31.306111  265161 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:08:31.306125  265161 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:08:31.306149  265161 start.go:360] acquireMachinesLock for no-preload-753103: {Name:mk75e497173a23050868488b8602a26938335e69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.306219  265161 start.go:364] duration metric: took 56.487µs to acquireMachinesLock for "no-preload-753103"
	I1212 20:08:31.306239  265161 start.go:93] Provisioning new machine with config: &{Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:08:31.306336  265161 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:08:30.772434  260486 cli_runner.go:164] Run: docker network inspect old-k8s-version-824670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:08:30.789456  260486 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 20:08:30.793651  260486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:30.803696  260486 kubeadm.go:884] updating cluster {Name:old-k8s-version-824670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-824670 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:08:30.803831  260486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:08:30.803887  260486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:08:30.833507  260486 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:08:30.833525  260486 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:08:30.833569  260486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:08:30.859561  260486 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:08:30.859578  260486 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:08:30.859587  260486 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1212 20:08:30.859660  260486 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-824670 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-824670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:08:30.859718  260486 ssh_runner.go:195] Run: crio config
	I1212 20:08:30.909806  260486 cni.go:84] Creating CNI manager for ""
	I1212 20:08:30.909829  260486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:30.909846  260486 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:08:30.909865  260486 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-824670 NodeName:old-k8s-version-824670 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:08:30.909984  260486 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-824670"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:08:30.910038  260486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1212 20:08:30.927970  260486 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:08:30.928031  260486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:08:30.935701  260486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1212 20:08:30.948106  260486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:08:30.966056  260486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1212 20:08:30.977843  260486 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:08:30.981240  260486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:30.990854  260486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:31.074196  260486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:08:31.096647  260486 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670 for IP: 192.168.94.2
	I1212 20:08:31.096667  260486 certs.go:195] generating shared ca certs ...
	I1212 20:08:31.096684  260486 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.096817  260486 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:08:31.096872  260486 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:08:31.096885  260486 certs.go:257] generating profile certs ...
	I1212 20:08:31.096951  260486 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.key
	I1212 20:08:31.096976  260486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt with IP's: []
	I1212 20:08:31.192438  260486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt ...
	I1212 20:08:31.192469  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt: {Name:mk4c392339d0b9d3aa04bd97e3fb072c90819343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.192669  260486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.key ...
	I1212 20:08:31.192691  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.key: {Name:mke0d0cd7cd4d72fa8714feae70851928bd527b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.192815  260486 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa
	I1212 20:08:31.192840  260486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 20:08:31.330627  260486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa ...
	I1212 20:08:31.330656  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa: {Name:mk7e495d4963b24e297aa0a63e83e07a95cc593d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.330797  260486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa ...
	I1212 20:08:31.330820  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa: {Name:mk911bdcbf5d97cdec932d03c3a8dfc9d8038cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.330947  260486 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt
	I1212 20:08:31.331039  260486 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key
	I1212 20:08:31.331126  260486 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key
	I1212 20:08:31.331149  260486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt with IP's: []
	I1212 20:08:31.369158  260486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt ...
	I1212 20:08:31.369188  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt: {Name:mk3b92e9fad6e762611d14414653f488eb2e03a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.369379  260486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key ...
	I1212 20:08:31.369400  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key: {Name:mkd01137ff6e745dad2c96283b958e8c28f025b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.369629  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:08:31.369690  260486 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:08:31.369707  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:08:31.369756  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:08:31.369807  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:08:31.369843  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:08:31.369907  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:08:31.370813  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:08:31.392387  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:08:31.410586  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:08:31.427597  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:08:31.444678  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 20:08:31.462420  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:08:31.484025  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:08:31.502883  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:08:31.519509  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:08:31.539291  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:08:31.556739  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:08:31.575731  260486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
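The profile certificates copied above were generated moments earlier; per the log, the apiserver certificate is signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2. If a later start were to fail with TLS name mismatches, the SANs actually baked into the deployed certificate could be confirmed on the node with a generic openssl query (not part of this run):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'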
	I1212 20:08:31.589501  260486 ssh_runner.go:195] Run: openssl version
	I1212 20:08:31.596204  260486 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.603839  260486 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:08:31.613484  260486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.618313  260486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.618379  260486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.670580  260486 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:31.678982  260486 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:31.687044  260486 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.696350  260486 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:08:31.710063  260486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.714247  260486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.714316  260486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.752872  260486 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:08:31.763140  260486 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:08:31.772375  260486 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.781503  260486 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:08:31.793987  260486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.798570  260486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.798624  260486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.841677  260486 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:08:31.849760  260486 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
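The openssl/ln pairs above follow OpenSSL's hashed-directory convention: each trusted PEM placed under /usr/share/ca-certificates is hashed with openssl x509 -hash and then exposed in /etc/ssl/certs as a <subject-hash>.0 symlink, which is how TLS clients on the node find the minikube CA. Reproducing the minikubeCA step from this log by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0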
	I1212 20:08:31.857153  260486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:08:31.860851  260486 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:08:31.860909  260486 kubeadm.go:401] StartCluster: {Name:old-k8s-version-824670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-824670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:08:31.860978  260486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:08:31.861031  260486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:08:31.892029  260486 cri.go:89] found id: ""
	I1212 20:08:31.892096  260486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:08:31.901437  260486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:08:31.909691  260486 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:08:31.909756  260486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:08:31.917796  260486 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:08:31.917815  260486 kubeadm.go:158] found existing configuration files:
	
	I1212 20:08:31.917871  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:08:31.926266  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:08:31.926385  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:08:31.934965  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:08:31.942956  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:08:31.943009  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:08:31.950203  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:08:31.957930  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:08:31.957976  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:08:31.965478  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:08:31.973421  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:08:31.973464  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:08:31.981528  260486 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:08:32.035028  260486 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1212 20:08:32.035194  260486 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:08:32.086191  260486 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:08:32.086318  260486 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:08:32.086402  260486 kubeadm.go:319] OS: Linux
	I1212 20:08:32.086503  260486 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:08:32.086594  260486 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:08:32.086708  260486 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:08:32.086796  260486 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:08:32.086861  260486 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:08:32.086936  260486 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:08:32.086999  260486 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:08:32.087080  260486 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:08:32.169567  260486 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:08:32.169691  260486 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:08:32.169826  260486 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 20:08:32.337887  260486 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:08:29.693371  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:29.693762  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:29.693817  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:29.693879  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:29.727809  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:29.727836  244825 cri.go:89] found id: ""
	I1212 20:08:29.727845  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:29.727900  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.731629  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:29.731683  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:29.765822  244825 cri.go:89] found id: ""
	I1212 20:08:29.765846  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.765856  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:29.765863  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:29.765914  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:29.800777  244825 cri.go:89] found id: ""
	I1212 20:08:29.800799  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.800807  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:29.800814  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:29.800865  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:29.836516  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:29.836537  244825 cri.go:89] found id: ""
	I1212 20:08:29.836547  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:29.836608  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.840199  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:29.840249  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:29.878352  244825 cri.go:89] found id: ""
	I1212 20:08:29.878378  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.878389  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:29.878397  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:29.878445  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:29.917800  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:29.917820  244825 cri.go:89] found id: "2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d"
	I1212 20:08:29.917824  244825 cri.go:89] found id: ""
	I1212 20:08:29.917831  244825 logs.go:282] 2 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56 2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d]
	I1212 20:08:29.917872  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.921527  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.924831  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:29.924885  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:29.957241  244825 cri.go:89] found id: ""
	I1212 20:08:29.957262  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.957285  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:29.957294  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:29.957346  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:29.991264  244825 cri.go:89] found id: ""
	I1212 20:08:29.991298  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.991307  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:29.991323  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:29.991340  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:30.011806  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:30.011836  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:30.080991  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:30.081019  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:30.081034  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:30.124508  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:30.124539  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:30.213637  244825 logs.go:123] Gathering logs for kube-controller-manager [2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d] ...
	I1212 20:08:30.213680  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d"
	I1212 20:08:30.258769  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:30.258798  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:30.306688  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:30.306713  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:30.396238  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:30.396266  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:30.430845  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:30.430866  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
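This log-gathering pass, like the ones that follow, is built entirely on crictl: each control-plane component is looked up with a quiet, name-filtered listing, and any ID that turns up has its last 400 lines tailed. The equivalent manual check on the node, with <id> standing in for an ID from the listing, would be:

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <id>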
	I1212 20:08:32.982013  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:32.982369  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:32.982419  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:32.982464  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:33.022928  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:33.022949  244825 cri.go:89] found id: ""
	I1212 20:08:33.022959  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:33.023014  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:33.026655  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:33.026717  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:33.062000  244825 cri.go:89] found id: ""
	I1212 20:08:33.062025  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.062034  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:33.062043  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:33.062091  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:32.340117  260486 out.go:252]   - Generating certificates and keys ...
	I1212 20:08:32.340252  260486 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:08:32.340401  260486 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:08:32.456742  260486 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:08:32.638722  260486 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:08:32.772991  260486 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:08:32.881081  260486 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:08:33.278990  260486 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:08:33.279186  260486 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-824670] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 20:08:33.368521  260486 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:08:33.368733  260486 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-824670] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 20:08:33.585827  260486 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:08:33.670440  260486 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:08:33.853231  260486 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:08:33.853347  260486 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:08:33.955038  260486 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:08:29.269957  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:29.269989  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:29.304137  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:29.304171  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:29.317977  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:29.318003  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:29.371837  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:29.371857  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:29.371873  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:29.396860  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:29.396890  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:31.977329  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:31.977686  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:31.977760  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:31.977818  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:32.006001  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:32.006023  245478 cri.go:89] found id: ""
	I1212 20:08:32.006031  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:32.006087  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:32.011107  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:32.011173  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:32.046300  245478 cri.go:89] found id: ""
	I1212 20:08:32.046325  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.046336  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:32.046344  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:32.046404  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:32.086505  245478 cri.go:89] found id: ""
	I1212 20:08:32.086525  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.086536  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:32.086546  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:32.086599  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:32.126101  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:32.126128  245478 cri.go:89] found id: ""
	I1212 20:08:32.126137  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:32.126181  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:32.130564  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:32.130629  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:32.162040  245478 cri.go:89] found id: ""
	I1212 20:08:32.162061  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.162068  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:32.162075  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:32.162131  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:32.191747  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:32.191769  245478 cri.go:89] found id: ""
	I1212 20:08:32.191781  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:32.191846  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:32.196548  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:32.196616  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:32.225129  245478 cri.go:89] found id: ""
	I1212 20:08:32.225156  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.225172  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:32.225178  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:32.225223  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:32.254260  245478 cri.go:89] found id: ""
	I1212 20:08:32.254306  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.254317  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:32.254328  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:32.254344  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:32.291476  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:32.291506  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:32.387382  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:32.387414  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:32.407725  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:32.407753  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:32.475154  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:32.475178  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:32.475198  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:32.511651  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:32.511682  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:32.546814  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:32.546846  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:32.592402  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:32.592437  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:34.278923  260486 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:08:34.442322  260486 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:08:34.597138  260486 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:08:34.597835  260486 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:08:34.601605  260486 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
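The [etcd] and [control-plane] phases write static Pod manifests under /etc/kubernetes/manifests, which is the staticPodPath configured in the KubeletConfiguration earlier in this log, so the kubelet started at 20:08:31 launches the control plane directly from disk. Listing that directory on the node is a quick, generic way to confirm the manifests landed (not something this run does explicitly):

	sudo ls /etc/kubernetes/manifests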
	I1212 20:08:31.307992  265161 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:08:31.308188  265161 start.go:159] libmachine.API.Create for "no-preload-753103" (driver="docker")
	I1212 20:08:31.308217  265161 client.go:173] LocalClient.Create starting
	I1212 20:08:31.308285  265161 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:08:31.308326  265161 main.go:143] libmachine: Decoding PEM data...
	I1212 20:08:31.308346  265161 main.go:143] libmachine: Parsing certificate...
	I1212 20:08:31.308413  265161 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:08:31.308445  265161 main.go:143] libmachine: Decoding PEM data...
	I1212 20:08:31.308464  265161 main.go:143] libmachine: Parsing certificate...
	I1212 20:08:31.308766  265161 cli_runner.go:164] Run: docker network inspect no-preload-753103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:08:31.325247  265161 cli_runner.go:211] docker network inspect no-preload-753103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:08:31.325324  265161 network_create.go:284] running [docker network inspect no-preload-753103] to gather additional debugging logs...
	I1212 20:08:31.325341  265161 cli_runner.go:164] Run: docker network inspect no-preload-753103
	W1212 20:08:31.342776  265161 cli_runner.go:211] docker network inspect no-preload-753103 returned with exit code 1
	I1212 20:08:31.342798  265161 network_create.go:287] error running [docker network inspect no-preload-753103]: docker network inspect no-preload-753103: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-753103 not found
	I1212 20:08:31.342808  265161 network_create.go:289] output of [docker network inspect no-preload-753103]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-753103 not found
	
	** /stderr **
	I1212 20:08:31.342875  265161 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:08:31.360868  265161 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:08:31.361566  265161 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:08:31.362255  265161 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:08:31.362898  265161 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-09b123768b60 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:6c:50:8a:dd:de} reservation:<nil>}
	I1212 20:08:31.363656  265161 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021e7ff0}
	I1212 20:08:31.363681  265161 network_create.go:124] attempt to create docker network no-preload-753103 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 20:08:31.363714  265161 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-753103 no-preload-753103
	I1212 20:08:31.412633  265161 network_create.go:108] docker network no-preload-753103 192.168.85.0/24 created
	I1212 20:08:31.412659  265161 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-753103" container
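Before creating this network, minikube probed the existing bridge subnets (192.168.49.0/24, .58, .67 and .76 were all taken) and settled on the first free /24, 192.168.85.0/24, reserving .2 as the node's static IP. Mirroring what the log shows, the same network could be created and checked by hand with plain docker commands:

	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 no-preload-753103
	docker network inspect no-preload-753103 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'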
	I1212 20:08:31.412703  265161 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:08:31.430605  265161 cli_runner.go:164] Run: docker volume create no-preload-753103 --label name.minikube.sigs.k8s.io=no-preload-753103 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:08:31.448180  265161 oci.go:103] Successfully created a docker volume no-preload-753103
	I1212 20:08:31.448243  265161 cli_runner.go:164] Run: docker run --rm --name no-preload-753103-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-753103 --entrypoint /usr/bin/test -v no-preload-753103:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:08:31.467146  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 20:08:31.482460  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:31.498254  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:31.505634  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 20:08:31.584799  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:31.838485  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1212 20:08:31.838506  265161 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 552.91558ms
	I1212 20:08:31.838517  265161 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1212 20:08:31.868719  265161 oci.go:107] Successfully prepared a docker volume no-preload-753103
	I1212 20:08:31.868756  265161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1212 20:08:31.868817  265161 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:08:31.868845  265161 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:08:31.868877  265161 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:08:31.924343  265161 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-753103 --name no-preload-753103 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-753103 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-753103 --network no-preload-753103 --ip 192.168.85.2 --volume no-preload-753103:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:08:32.238512  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Running}}
	I1212 20:08:32.262014  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:08:32.284575  265161 cli_runner.go:164] Run: docker exec no-preload-753103 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:08:32.338359  265161 oci.go:144] the created container "no-preload-753103" has a running status.
	I1212 20:08:32.338390  265161 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa...
	I1212 20:08:32.534469  265161 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:08:32.573620  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:08:32.597764  265161 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:08:32.597787  265161 kic_runner.go:114] Args: [docker exec --privileged no-preload-753103 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:08:32.648326  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:08:32.670658  265161 machine.go:94] provisionDockerMachine start ...
	I1212 20:08:32.670742  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:32.689406  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:32.689735  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:32.689756  265161 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:08:32.830264  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-753103
	
	I1212 20:08:32.830304  265161 ubuntu.go:182] provisioning hostname "no-preload-753103"
	I1212 20:08:32.830367  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:32.858721  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:32.859040  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:32.859060  265161 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-753103 && echo "no-preload-753103" | sudo tee /etc/hostname
	I1212 20:08:32.875331  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1212 20:08:32.875364  265161 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.589749443s
	I1212 20:08:32.875379  265161 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 20:08:32.881317  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1212 20:08:32.881350  265161 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.595684967s
	I1212 20:08:32.881378  265161 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 20:08:32.957822  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1212 20:08:32.957856  265161 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.672286348s
	I1212 20:08:32.957886  265161 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 20:08:33.014922  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-753103
	
	I1212 20:08:33.015021  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.035796  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:33.036002  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:33.036019  265161 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-753103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-753103/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-753103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:08:33.169615  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:08:33.169648  265161 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:08:33.169687  265161 ubuntu.go:190] setting up certificates
	I1212 20:08:33.169699  265161 provision.go:84] configureAuth start
	I1212 20:08:33.169751  265161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-753103
	I1212 20:08:33.189995  265161 provision.go:143] copyHostCerts
	I1212 20:08:33.190053  265161 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:08:33.190075  265161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:08:33.190156  265161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:08:33.190294  265161 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:08:33.190308  265161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:08:33.190352  265161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:08:33.190458  265161 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:08:33.190468  265161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:08:33.190507  265161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:08:33.190594  265161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.no-preload-753103 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-753103]
	I1212 20:08:33.229229  265161 provision.go:177] copyRemoteCerts
	I1212 20:08:33.229283  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:08:33.229329  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.248580  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:33.345211  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:08:33.363952  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:08:33.380629  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:08:33.398286  265161 provision.go:87] duration metric: took 228.554889ms to configureAuth
	I1212 20:08:33.398309  265161 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:08:33.398494  265161 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:08:33.398605  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.417177  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:33.417454  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:33.417475  265161 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:08:33.694149  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:08:33.694178  265161 machine.go:97] duration metric: took 1.023499062s to provisionDockerMachine
	I1212 20:08:33.694189  265161 client.go:176] duration metric: took 2.385962679s to LocalClient.Create
	I1212 20:08:33.694210  265161 start.go:167] duration metric: took 2.386023665s to libmachine.API.Create "no-preload-753103"
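At this point the machine is provisioned and the CRI-O drop-in carrying --insecure-registry has been written and picked up. If a run like this needs to be checked by hand afterwards, something along these lines should work from the host (a sketch only; the profile name comes from this log, `minikube ssh -p` passes the trailing command to the node, and the container must still be running):

	# Inspect the option file minikube wrote and confirm CRI-O restarted cleanly.
	minikube ssh -p no-preload-753103 -- cat /etc/sysconfig/crio.minikube
	minikube ssh -p no-preload-753103 -- sudo systemctl is-active crio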
	I1212 20:08:33.694220  265161 start.go:293] postStartSetup for "no-preload-753103" (driver="docker")
	I1212 20:08:33.694231  265161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:08:33.694304  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:08:33.694355  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.715072  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:33.810312  265161 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:08:33.813499  265161 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:08:33.813526  265161 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:08:33.813538  265161 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:08:33.813593  265161 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:08:33.813695  265161 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:08:33.813808  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:08:33.820814  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:08:33.839116  265161 start.go:296] duration metric: took 144.885326ms for postStartSetup
	I1212 20:08:33.839503  265161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-753103
	I1212 20:08:33.857255  265161 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/config.json ...
	I1212 20:08:33.857490  265161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:08:33.857527  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.874112  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:33.964709  265161 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:08:33.968848  265161 start.go:128] duration metric: took 2.662499242s to createHost
	I1212 20:08:33.968871  265161 start.go:83] releasing machines lock for "no-preload-753103", held for 2.662641224s
	I1212 20:08:33.968929  265161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-753103
	I1212 20:08:33.986327  265161 ssh_runner.go:195] Run: cat /version.json
	I1212 20:08:33.986336  265161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:08:33.986380  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.986424  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:34.005017  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:34.005321  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:34.535796  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1212 20:08:34.535826  265161 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 3.250202446s
	I1212 20:08:34.535841  265161 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1212 20:08:34.535860  265161 cache.go:87] Successfully saved all images to host disk.
	I1212 20:08:34.535934  265161 ssh_runner.go:195] Run: systemctl --version
	I1212 20:08:34.542541  265161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:08:34.574147  265161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:08:34.578493  265161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:08:34.578555  265161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:08:34.603039  265161 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:08:34.603058  265161 start.go:496] detecting cgroup driver to use...
	I1212 20:08:34.603088  265161 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:08:34.603139  265161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:08:34.618612  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:08:34.629638  265161 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:08:34.629688  265161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:08:34.649322  265161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:08:34.670309  265161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:08:34.757021  265161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:08:34.839106  265161 docker.go:234] disabling docker service ...
	I1212 20:08:34.839216  265161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:08:34.856734  265161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:08:34.867912  265161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:08:34.953792  265161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:08:35.033898  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:08:35.045359  265161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:08:35.058662  265161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:08:35.058716  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.068502  265161 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:08:35.068551  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.076672  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.084637  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.092663  265161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:08:35.099940  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.107809  265161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.120180  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.128194  265161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:08:35.135103  265161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:08:35.141721  265161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:35.232884  265161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:08:35.547357  265161 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:08:35.547425  265161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:08:35.551225  265161 start.go:564] Will wait 60s for crictl version
	I1212 20:08:35.551269  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.554699  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:08:35.580586  265161 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:08:35.580666  265161 ssh_runner.go:195] Run: crio --version
	I1212 20:08:35.610027  265161 ssh_runner.go:195] Run: crio --version
	I1212 20:08:35.650172  265161 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
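For readability, the CRI-O tweaks logged between 20:08:35.045 and 20:08:35.232 above boil down to the following shell sequence (a consolidated sketch of what the log shows; minikube issues these as individual ssh_runner calls on the node rather than one script):

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	conf=/etc/crio/crio.conf.d/02-crio.conf
	# Pause image and systemd cgroup driver, with conmon in the pod cgroup.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
	sudo sed -i '/conmon_cgroup = .*/d' "$conf"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	sudo rm -rf /etc/cni/net.mk
	# Allow unprivileged binds to low ports inside pods.
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
	sudo grep -q '^ *default_sysctls' "$conf" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
	# Enable forwarding, then restart CRI-O to pick everything up.
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio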
	I1212 20:08:35.651208  265161 cli_runner.go:164] Run: docker network inspect no-preload-753103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:08:35.671802  265161 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 20:08:35.675834  265161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:35.687522  265161 kubeadm.go:884] updating cluster {Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:08:35.687645  265161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:08:35.687687  265161 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:08:35.720446  265161 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 20:08:35.720473  265161 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 20:08:35.720540  265161 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:35.720542  265161 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.720585  265161 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.720611  265161 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1212 20:08:35.720637  265161 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.720670  265161 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.720633  265161 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.720585  265161 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.721893  265161 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.722358  265161 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.721924  265161 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.722255  265161 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.722459  265161 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.722895  265161 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.722909  265161 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1212 20:08:35.723093  265161 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:35.893614  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.901450  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.904167  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.909060  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.934614  265161 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1212 20:08:35.934658  265161 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.934696  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.940118  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.943652  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.945466  265161 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1212 20:08:35.945515  265161 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.945560  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.949005  265161 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1212 20:08:35.949039  265161 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.949083  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.953393  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.953487  265161 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1212 20:08:35.953517  265161 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.953545  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.985670  265161 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1212 20:08:35.985713  265161 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.985759  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.988631  265161 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1212 20:08:35.988663  265161 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.988701  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.988727  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.988784  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.988849  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.988856  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.993812  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:36.029565  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:36.029626  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:36.029847  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:36.030614  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:36.030657  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:36.032882  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:36.077038  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:36.077106  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:36.077115  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 20:08:36.077139  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:36.077106  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:36.077162  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:36.077204  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1212 20:08:36.118371  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 20:08:36.118469  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 20:08:36.118902  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:36.118971  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:36.118991  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
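No preload tarball is published for v1.35.0-beta.0 (note the "couldn't find preloaded image" line above), so minikube falls back to transferring each image from its local cache. The per-image flow it is working through here corresponds roughly to the following commands on the node (a sketch assembled from the commands in this log, shown for the coredns image):

	# Is the image already in the node's containers-storage?
	if ! sudo podman image inspect --format '{{.Id}}' registry.k8s.io/coredns/coredns:v1.13.1 >/dev/null 2>&1; then
	  # Drop any stale tag, then load the cached tarball copied under /var/lib/minikube/images.
	  sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1 || true
	  sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	fi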
	I1212 20:08:33.099183  244825 cri.go:89] found id: ""
	I1212 20:08:33.099216  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.099225  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:33.099230  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:33.099297  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:33.136038  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:33.136060  244825 cri.go:89] found id: ""
	I1212 20:08:33.136068  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:33.136114  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:33.140063  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:33.140115  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:33.181121  244825 cri.go:89] found id: ""
	I1212 20:08:33.181146  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.181153  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:33.181159  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:33.181211  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:33.216582  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:33.216603  244825 cri.go:89] found id: ""
	I1212 20:08:33.216613  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:33.216655  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:33.220373  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:33.220433  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:33.257479  244825 cri.go:89] found id: ""
	I1212 20:08:33.257497  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.257504  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:33.257512  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:33.257552  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:33.292293  244825 cri.go:89] found id: ""
	I1212 20:08:33.292319  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.292329  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:33.292340  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:33.292360  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:33.360026  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:33.360053  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:33.399678  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:33.399705  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:33.448360  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:33.448385  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:33.490145  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:33.490173  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:33.594524  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:33.594554  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:33.611592  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:33.611625  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:33.671922  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:33.671945  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:33.671960  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:36.215338  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:36.216243  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:36.216319  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:36.216377  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:36.259376  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:36.259405  244825 cri.go:89] found id: ""
	I1212 20:08:36.259416  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:36.259475  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.264302  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:36.264367  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:36.310504  244825 cri.go:89] found id: ""
	I1212 20:08:36.310527  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.310540  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:36.310548  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:36.310598  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:36.359952  244825 cri.go:89] found id: ""
	I1212 20:08:36.359980  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.359991  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:36.359999  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:36.360056  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:36.413009  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:36.413041  244825 cri.go:89] found id: ""
	I1212 20:08:36.413051  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:36.413109  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.418548  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:36.418615  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:36.469051  244825 cri.go:89] found id: ""
	I1212 20:08:36.469093  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.469103  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:36.469111  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:36.469174  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:36.526264  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:36.526295  244825 cri.go:89] found id: ""
	I1212 20:08:36.526305  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:36.526359  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.531961  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:36.532028  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:36.583971  244825 cri.go:89] found id: ""
	I1212 20:08:36.584000  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.584010  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:36.584018  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:36.584089  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:36.632000  244825 cri.go:89] found id: ""
	I1212 20:08:36.632027  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.632037  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:36.632048  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:36.632070  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:36.676522  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:36.676548  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:36.821545  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:36.821581  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:36.840830  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:36.840859  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:36.917924  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:36.917946  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:36.917961  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:36.958297  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:36.958323  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:37.046321  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:37.046353  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:37.085078  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:37.085106  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
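Meanwhile process 244825 keeps probing the apiserver and, while the probe fails with connection refused, re-gathers the same set of logs. Those checks can be reproduced by hand roughly as follows (a sketch; the address, port and container id are the ones in this log, and curl -k stands in for minikube's internal HTTPS healthz probe):

	# Probe the apiserver the way the healthz check above does (prints "ok" once it is up).
	curl -ks https://192.168.103.2:8443/healthz; echo
	# The log-gathering commands from the loop above, runnable on the node:
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c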
	I1212 20:08:34.603237  260486 out.go:252]   - Booting up control plane ...
	I1212 20:08:34.603383  260486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:08:34.603501  260486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:08:34.604085  260486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:08:34.617362  260486 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:08:34.618216  260486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:08:34.618310  260486 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:08:34.730506  260486 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
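The [wait-control-plane] phase above blocks until the kubelet has started the static pods written to /etc/kubernetes/manifests. A quick way to watch that from inside the node while the phase runs (a sketch, not something the test itself executes):

	# Static pod manifests kubeadm just wrote on the node.
	ls /etc/kubernetes/manifests
	# Containers CRI-O has started for them so far.
	sudo crictl ps -a | grep -E 'kube-(apiserver|controller-manager|scheduler)'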
	I1212 20:08:35.173427  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:35.173863  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:35.173912  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:35.173951  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:35.207292  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:35.207315  245478 cri.go:89] found id: ""
	I1212 20:08:35.207325  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:35.207385  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.211116  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:35.211169  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:35.236404  245478 cri.go:89] found id: ""
	I1212 20:08:35.236428  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.236438  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:35.236445  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:35.236492  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:35.262102  245478 cri.go:89] found id: ""
	I1212 20:08:35.262127  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.262137  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:35.262143  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:35.262185  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:35.286330  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:35.286346  245478 cri.go:89] found id: ""
	I1212 20:08:35.286354  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:35.286399  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.290212  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:35.290258  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:35.317607  245478 cri.go:89] found id: ""
	I1212 20:08:35.317631  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.317642  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:35.317656  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:35.317702  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:35.343703  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:35.343726  245478 cri.go:89] found id: ""
	I1212 20:08:35.343736  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:35.343780  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.347432  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:35.347483  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:35.371903  245478 cri.go:89] found id: ""
	I1212 20:08:35.371933  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.371940  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:35.371948  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:35.371986  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:35.396117  245478 cri.go:89] found id: ""
	I1212 20:08:35.396138  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.396146  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:35.396155  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:35.396165  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:35.478108  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:35.478137  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:35.492265  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:35.492315  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:35.546987  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:35.547004  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:35.547024  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:35.582658  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:35.582684  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:35.610932  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:35.610969  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:35.644808  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:35.644845  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:35.708461  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:35.708494  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:38.251991  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:38.252425  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:38.252477  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:38.252531  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:38.278786  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:38.278805  245478 cri.go:89] found id: ""
	I1212 20:08:38.278815  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:38.278866  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:38.282629  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:38.282687  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:38.308302  245478 cri.go:89] found id: ""
	I1212 20:08:38.308323  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.308331  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:38.308336  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:38.308380  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:38.333938  245478 cri.go:89] found id: ""
	I1212 20:08:38.333960  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.333970  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:38.333978  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:38.334032  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:38.358789  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:38.358809  245478 cri.go:89] found id: ""
	I1212 20:08:38.358820  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:38.358876  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:38.362770  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:38.362830  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:38.388459  245478 cri.go:89] found id: ""
	I1212 20:08:38.388484  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.388493  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:38.388498  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:38.388539  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:38.414949  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:38.414970  245478 cri.go:89] found id: ""
	I1212 20:08:38.414978  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:38.415017  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:38.418784  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:38.418840  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:38.445333  245478 cri.go:89] found id: ""
	I1212 20:08:38.445355  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.445363  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:38.445371  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:38.445427  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:38.469911  245478 cri.go:89] found id: ""
	I1212 20:08:38.469934  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.469950  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:38.469961  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:38.469972  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:38.523899  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:38.523918  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:38.523931  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:38.553594  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:38.553621  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:38.579102  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:38.579124  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:38.606622  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:38.606656  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:38.668913  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:38.668946  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:38.700862  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:38.700887  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:38.800352  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:38.800382  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:39.733070  260486 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002751 seconds
	I1212 20:08:39.733209  260486 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:08:39.747288  260486 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:08:40.267995  260486 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:08:40.268329  260486 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-824670 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:08:40.778654  260486 kubeadm.go:319] [bootstrap-token] Using token: 0rx6pa.vzh88q7v9ne7n54f
	I1212 20:08:36.123521  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.123557  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1212 20:08:36.123675  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:36.123676  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.123750  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:36.123717  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1212 20:08:36.123811  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1212 20:08:36.124050  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.124964  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.124988  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1212 20:08:36.146201  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.170995  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.171003  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1212 20:08:36.171016  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.171026  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1212 20:08:36.171037  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1212 20:08:36.171119  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:08:36.294544  265161 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 20:08:36.294589  265161 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.294636  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.294698  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1212 20:08:36.294713  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1212 20:08:36.368417  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.440783  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.513083  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.542727  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.542788  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.588857  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 20:08:36.588964  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:08:36.995783  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1212 20:08:38.125793  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.582980729s)
	I1212 20:08:38.125824  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1212 20:08:38.125835  265161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.536853615s)
	I1212 20:08:38.125845  265161 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1212 20:08:38.125861  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1212 20:08:38.125880  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1212 20:08:38.125890  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1212 20:08:38.125885  265161 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1: (1.130062386s)
	I1212 20:08:38.125951  265161 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1212 20:08:38.125981  265161 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1212 20:08:38.126011  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.419373  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.293450663s)
	I1212 20:08:39.419406  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1212 20:08:39.419423  265161 ssh_runner.go:235] Completed: which crictl: (1.293391671s)
	I1212 20:08:39.419431  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:39.419486  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:08:39.419530  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:40.626387  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.206825785s)
	I1212 20:08:40.626419  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1212 20:08:40.626439  265161 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.206940412s)
	I1212 20:08:40.626442  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 20:08:40.626487  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:08:40.626489  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
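Image loading for this no-preload profile follows a fixed cycle in the lines above and below: stat the tarball under /var/lib/minikube/images, scp it from the local cache if the stat fails, load it with sudo podman load -i, and clear any stale tag with crictl rmi. A hedged pair of commands for checking what the runtime actually ended up with on the node (not part of this log, though both tools appear elsewhere in it):

  sudo podman images --format '{{.Repository}}:{{.Tag}}'
  sudo crictl images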
	I1212 20:08:40.780171  260486 out.go:252]   - Configuring RBAC rules ...
	I1212 20:08:40.780324  260486 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:08:40.785141  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:08:40.790671  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:08:40.793340  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:08:40.795902  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:08:40.798592  260486 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:08:40.807781  260486 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:08:41.000066  260486 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:08:41.189309  260486 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:08:41.190027  260486 kubeadm.go:319] 
	I1212 20:08:41.190137  260486 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:08:41.190156  260486 kubeadm.go:319] 
	I1212 20:08:41.190256  260486 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:08:41.190268  260486 kubeadm.go:319] 
	I1212 20:08:41.190322  260486 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:08:41.190415  260486 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:08:41.190492  260486 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:08:41.190505  260486 kubeadm.go:319] 
	I1212 20:08:41.190591  260486 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:08:41.190600  260486 kubeadm.go:319] 
	I1212 20:08:41.190678  260486 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:08:41.190694  260486 kubeadm.go:319] 
	I1212 20:08:41.190764  260486 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:08:41.190879  260486 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:08:41.190990  260486 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:08:41.191006  260486 kubeadm.go:319] 
	I1212 20:08:41.191123  260486 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:08:41.191229  260486 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:08:41.191239  260486 kubeadm.go:319] 
	I1212 20:08:41.191381  260486 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0rx6pa.vzh88q7v9ne7n54f \
	I1212 20:08:41.191525  260486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:08:41.191584  260486 kubeadm.go:319] 	--control-plane 
	I1212 20:08:41.191596  260486 kubeadm.go:319] 
	I1212 20:08:41.191705  260486 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:08:41.191722  260486 kubeadm.go:319] 
	I1212 20:08:41.191855  260486 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0rx6pa.vzh88q7v9ne7n54f \
	I1212 20:08:41.192005  260486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:08:41.194559  260486 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:08:41.194738  260486 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:08:41.194765  260486 cni.go:84] Creating CNI manager for ""
	I1212 20:08:41.194777  260486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:41.197043  260486 out.go:179] * Configuring CNI (Container Networking Interface) ...
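With the "docker" driver and "crio" runtime detected, minikube picks kindnet as the CNI and, a little further down in this log, writes the manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. A minimal sketch of repeating that apply step by hand, assuming the manifest is still present on the node:

  sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply \
    --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
  sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods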
	I1212 20:08:39.639209  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:39.639609  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:39.639659  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:39.639704  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:39.673665  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:39.673684  244825 cri.go:89] found id: ""
	I1212 20:08:39.673692  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:39.673740  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.677310  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:39.677373  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:39.710524  244825 cri.go:89] found id: ""
	I1212 20:08:39.710549  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.710560  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:39.710568  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:39.710619  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:39.744786  244825 cri.go:89] found id: ""
	I1212 20:08:39.744811  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.744822  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:39.744830  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:39.744884  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:39.781015  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:39.781037  244825 cri.go:89] found id: ""
	I1212 20:08:39.781046  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:39.781109  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.784783  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:39.784845  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:39.819095  244825 cri.go:89] found id: ""
	I1212 20:08:39.819117  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.819131  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:39.819139  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:39.819190  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:39.860027  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:39.860048  244825 cri.go:89] found id: ""
	I1212 20:08:39.860058  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:39.860119  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.864652  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:39.864719  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:39.904929  244825 cri.go:89] found id: ""
	I1212 20:08:39.904955  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.904966  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:39.904974  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:39.905029  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:39.941704  244825 cri.go:89] found id: ""
	I1212 20:08:39.941728  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.941742  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:39.941752  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:39.941767  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:39.959527  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:39.959563  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:40.018952  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:40.018969  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:40.018983  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:40.058375  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:40.058407  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:40.128094  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:40.128121  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:40.163906  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:40.163930  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:40.225409  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:40.225443  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.271236  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:40.271264  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
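Every retry in this block follows the same diagnostic loop: the healthz probe against https://192.168.103.2:8443 is refused, so minikube lists CRI containers and pulls the kubelet and CRI-O journals. The same checks can be run by hand on the node; the container ID below is the kube-apiserver ID found earlier in this log, and the rest is a plain restatement of the commands above:

  curl -sk https://192.168.103.2:8443/healthz
  sudo crictl ps -a --name kube-apiserver
  sudo crictl logs --tail 50 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c
  sudo journalctl -u kubelet -n 100 --no-pager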
	I1212 20:08:42.870874  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:42.871257  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:42.871335  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:42.871382  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:42.906933  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:42.906955  244825 cri.go:89] found id: ""
	I1212 20:08:42.906964  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:42.907022  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:42.910857  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:42.910919  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:42.944269  244825 cri.go:89] found id: ""
	I1212 20:08:42.944314  244825 logs.go:282] 0 containers: []
	W1212 20:08:42.944324  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:42.944331  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:42.944391  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:42.979080  244825 cri.go:89] found id: ""
	I1212 20:08:42.979107  244825 logs.go:282] 0 containers: []
	W1212 20:08:42.979116  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:42.979123  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:42.979173  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:43.014534  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:43.014555  244825 cri.go:89] found id: ""
	I1212 20:08:43.014564  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:43.014607  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:43.018363  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:43.018423  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:43.054456  244825 cri.go:89] found id: ""
	I1212 20:08:43.054483  244825 logs.go:282] 0 containers: []
	W1212 20:08:43.054494  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:43.054502  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:43.054564  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:41.198353  260486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:08:41.203231  260486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1212 20:08:41.203249  260486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:08:41.217767  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:08:42.013338  260486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:08:42.013435  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:42.013452  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-824670 minikube.k8s.io/updated_at=2025_12_12T20_08_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=old-k8s-version-824670 minikube.k8s.io/primary=true
	I1212 20:08:42.022701  260486 ops.go:34] apiserver oom_adj: -16
	I1212 20:08:42.089669  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:42.590462  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:43.090516  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:43.590299  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:41.318904  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:41.319393  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:41.319446  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:41.319492  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:41.371691  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:41.371712  245478 cri.go:89] found id: ""
	I1212 20:08:41.371722  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:41.371782  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:41.376689  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:41.376754  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:41.407123  245478 cri.go:89] found id: ""
	I1212 20:08:41.407154  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.407165  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:41.407173  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:41.407237  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:41.438656  245478 cri.go:89] found id: ""
	I1212 20:08:41.438682  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.438693  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:41.438701  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:41.438753  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:41.471830  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:41.471851  245478 cri.go:89] found id: ""
	I1212 20:08:41.471861  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:41.471917  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:41.477076  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:41.477136  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:41.510968  245478 cri.go:89] found id: ""
	I1212 20:08:41.510995  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.511006  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:41.511014  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:41.511073  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:41.545107  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:41.545132  245478 cri.go:89] found id: ""
	I1212 20:08:41.545235  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:41.545321  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:41.550045  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:41.550115  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:41.580785  245478 cri.go:89] found id: ""
	I1212 20:08:41.580815  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.580827  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:41.580834  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:41.580892  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:41.614524  245478 cri.go:89] found id: ""
	I1212 20:08:41.614551  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.614562  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:41.614573  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:41.614589  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:41.633086  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:41.633115  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:41.707740  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:41.707761  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:41.707782  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:41.745479  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:41.745512  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:41.779160  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:41.779189  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:41.808978  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:41.809012  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:41.876306  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:41.876344  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:41.911771  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:41.911802  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:41.829031  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.202450399s)
	I1212 20:08:41.829087  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1212 20:08:41.829085  265161 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.202568652s)
	I1212 20:08:41.829133  265161 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:08:41.829151  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:08:41.829192  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:08:43.394225  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.565010892s)
	I1212 20:08:43.394260  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1212 20:08:43.394303  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:43.394304  265161 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.565114428s)
	I1212 20:08:43.394347  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:43.394349  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1212 20:08:43.394474  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1212 20:08:44.756482  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.362109676s)
	I1212 20:08:44.756515  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1212 20:08:44.756527  265161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: (1.362036501s)
	I1212 20:08:44.756535  265161 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:08:44.756551  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1212 20:08:44.756576  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:08:44.756573  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1212 20:08:45.296360  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 20:08:45.296396  265161 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1212 20:08:45.296436  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1212 20:08:45.404602  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1212 20:08:45.404643  265161 cache_images.go:125] Successfully loaded all cached images
	I1212 20:08:45.404651  265161 cache_images.go:94] duration metric: took 9.684163438s to LoadCachedImages
	I1212 20:08:45.404665  265161 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 20:08:45.404775  265161 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-753103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
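The kubelet unit override shown above is installed later in this log as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. A short, hedged way to inspect the merged unit on the node once those files are in place (systemctl cat is standard systemd, not taken from this log):

  systemctl cat kubelet
  sudo systemctl daemon-reload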
	I1212 20:08:45.404867  265161 ssh_runner.go:195] Run: crio config
	I1212 20:08:45.446680  265161 cni.go:84] Creating CNI manager for ""
	I1212 20:08:45.446697  265161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:45.446710  265161 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:08:45.446728  265161 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-753103 NodeName:no-preload-753103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:08:45.446833  265161 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-753103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
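This generated InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document is what gets copied to /var/tmp/minikube/kubeadm.yaml.new later in this log. A minimal sketch of dry-running it with the bundled kubeadm, assuming the file has been promoted to /var/tmp/minikube/kubeadm.yaml on the node:

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run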
	
	I1212 20:08:45.446894  265161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:08:45.454665  265161 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1212 20:08:45.454709  265161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:08:45.462825  265161 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1212 20:08:45.462889  265161 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1212 20:08:45.462910  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 20:08:45.462970  265161 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1212 20:08:45.466674  265161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 20:08:45.466700  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1212 20:08:43.092398  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:43.092418  244825 cri.go:89] found id: ""
	I1212 20:08:43.092429  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:43.092488  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:43.096600  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:43.096666  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:43.139162  244825 cri.go:89] found id: ""
	I1212 20:08:43.139186  244825 logs.go:282] 0 containers: []
	W1212 20:08:43.139197  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:43.139206  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:43.139264  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:43.179247  244825 cri.go:89] found id: ""
	I1212 20:08:43.179291  244825 logs.go:282] 0 containers: []
	W1212 20:08:43.179302  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:43.179313  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:43.179328  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:43.222396  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:43.222425  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:43.280020  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:43.280059  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:43.319896  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:43.319930  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.426931  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:43.426963  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:43.442691  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:43.442715  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:43.502328  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:43.502348  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:43.502366  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:43.540866  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:43.540900  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:46.111532  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:46.112064  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:46.112123  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:46.112165  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:46.155917  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:46.155941  244825 cri.go:89] found id: ""
	I1212 20:08:46.155951  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:46.156002  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:46.160472  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:46.160544  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:46.197687  244825 cri.go:89] found id: ""
	I1212 20:08:46.197716  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.197727  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:46.197735  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:46.197786  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:46.238516  244825 cri.go:89] found id: ""
	I1212 20:08:46.238542  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.238552  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:46.238560  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:46.238609  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:46.273243  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:46.273288  244825 cri.go:89] found id: ""
	I1212 20:08:46.273301  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:46.273347  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:46.277248  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:46.277342  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:46.309983  244825 cri.go:89] found id: ""
	I1212 20:08:46.310005  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.310015  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:46.310023  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:46.310070  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:46.346164  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:46.346184  244825 cri.go:89] found id: ""
	I1212 20:08:46.346194  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:46.346247  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:46.349786  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:46.349844  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:46.387217  244825 cri.go:89] found id: ""
	I1212 20:08:46.387246  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.387308  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:46.387319  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:46.387380  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:46.433829  244825 cri.go:89] found id: ""
	I1212 20:08:46.433856  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.433867  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:46.433878  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:46.433900  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:46.550589  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:46.550625  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:46.572398  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:46.572430  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:46.657022  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:46.657046  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:46.657062  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:46.699346  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:46.699374  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:46.766841  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:46.766867  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:46.800364  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:46.800389  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:46.846664  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:46.846693  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.322703  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:08:46.336772  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 20:08:46.341021  265161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 20:08:46.341051  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1212 20:08:46.454807  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1212 20:08:46.462493  265161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 20:08:46.462535  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
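Because this is the no-preload profile, kubectl, kubelet and kubeadm are downloaded from dl.k8s.io and scp'd into /var/lib/minikube/binaries/v1.35.0-beta.0 instead of coming from a preload tarball. A sketch of verifying one of those downloads against its published checksum, using the same URLs as above (the two spaces before the filename are required by sha256sum):

  curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet
  curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check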
	I1212 20:08:46.658843  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:08:46.667840  265161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:08:46.682074  265161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:08:46.876362  265161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1212 20:08:46.890982  265161 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:08:46.894934  265161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:46.952701  265161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:47.027327  265161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:08:47.054531  265161 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103 for IP: 192.168.85.2
	I1212 20:08:47.054547  265161 certs.go:195] generating shared ca certs ...
	I1212 20:08:47.054561  265161 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.054731  265161 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:08:47.054806  265161 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:08:47.054822  265161 certs.go:257] generating profile certs ...
	I1212 20:08:47.054902  265161 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.key
	I1212 20:08:47.054918  265161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt with IP's: []
	I1212 20:08:47.083072  265161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt ...
	I1212 20:08:47.083099  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt: {Name:mk8ea38dfc959f9ecc1890a3049161ef20ba2f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.083268  265161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.key ...
	I1212 20:08:47.083298  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.key: {Name:mk8ad0f4ecdf0768646879e602d2f79e3b039e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.083412  265161 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421
	I1212 20:08:47.083431  265161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 20:08:47.155746  265161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421 ...
	I1212 20:08:47.155780  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421: {Name:mk743ea3a5dfd6f0d3aa9df8c263651cb1815cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.155962  265161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421 ...
	I1212 20:08:47.155982  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421: {Name:mk29a2308cae746f0665e9ca087baeb7914e10fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.156088  265161 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt
	I1212 20:08:47.156181  265161 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key
	I1212 20:08:47.156261  265161 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key
	I1212 20:08:47.156300  265161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt with IP's: []
	I1212 20:08:47.365313  265161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt ...
	I1212 20:08:47.365339  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt: {Name:mkb7989fbda8dd318405e0f57c3a111f48b20a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.365533  265161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key ...
	I1212 20:08:47.365552  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key: {Name:mk2291c16bfb0af1a719821b1fc28e69e1427237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.365794  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:08:47.365851  265161 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:08:47.365874  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:08:47.365915  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:08:47.365954  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:08:47.365987  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:08:47.366043  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:08:47.366755  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:08:47.385519  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:08:47.402552  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:08:47.418983  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:08:47.435569  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:08:47.452024  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:08:47.468004  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:08:47.484684  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:08:47.501341  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:08:47.520406  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:08:47.536939  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:08:47.553107  265161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:08:47.564637  265161 ssh_runner.go:195] Run: openssl version
	I1212 20:08:47.570372  265161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.577183  265161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:08:47.584307  265161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.587941  265161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.587990  265161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.633169  265161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:08:47.641181  265161 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
	I1212 20:08:47.650704  265161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.658818  265161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:08:47.666719  265161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.670595  265161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.670650  265161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.712583  265161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:47.719823  265161 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:47.727948  265161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.735756  265161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:08:47.743568  265161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.747384  265161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.747431  265161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.793199  265161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:08:47.801052  265161 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:08:47.808336  265161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:08:47.812407  265161 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:08:47.812464  265161 kubeadm.go:401] StartCluster: {Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:08:47.812537  265161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:08:47.812585  265161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:08:47.840282  265161 cri.go:89] found id: ""
	I1212 20:08:47.840347  265161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:08:47.849211  265161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:08:47.857022  265161 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:08:47.857093  265161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:08:47.864606  265161 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:08:47.864625  265161 kubeadm.go:158] found existing configuration files:
	
	I1212 20:08:47.864677  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:08:47.872767  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:08:47.872822  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:08:47.880081  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:08:47.887357  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:08:47.887403  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:08:47.894342  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:08:47.901820  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:08:47.901870  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:08:47.909076  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:08:47.916480  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:08:47.916514  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:08:47.923681  265161 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:08:47.957036  265161 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:08:47.957119  265161 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:08:48.021197  265161 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:08:48.021309  265161 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:08:48.021359  265161 kubeadm.go:319] OS: Linux
	I1212 20:08:48.021444  265161 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:08:48.021532  265161 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:08:48.021625  265161 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:08:48.021719  265161 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:08:48.021800  265161 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:08:48.021870  265161 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:08:48.021941  265161 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:08:48.022002  265161 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:08:48.084129  265161 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:08:48.084319  265161 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:08:48.084457  265161 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:08:48.100204  265161 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:08:44.090376  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:44.590476  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:45.090121  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:45.590003  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:46.089754  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:46.590598  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:47.090522  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:47.590422  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:48.090489  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:48.590162  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:48.101951  265161 out.go:252]   - Generating certificates and keys ...
	I1212 20:08:48.102051  265161 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:08:48.102147  265161 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:08:48.144031  265161 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:08:48.221185  265161 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:08:48.349128  265161 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:08:48.418221  265161 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:08:48.451995  265161 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:08:48.452184  265161 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-753103] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:08:48.497385  265161 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:08:48.497527  265161 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-753103] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:08:48.570531  265161 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:08:48.688867  265161 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:08:48.774170  265161 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:08:48.774230  265161 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:08:48.787187  265161 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:08:48.802079  265161 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:08:48.863835  265161 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:08:48.947843  265161 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:08:49.110750  265161 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:08:49.111507  265161 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:08:49.117250  265161 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:08:44.498353  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:44.498830  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:44.498891  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:44.498952  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:44.527059  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:44.527080  245478 cri.go:89] found id: ""
	I1212 20:08:44.527090  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:44.527140  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:44.530986  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:44.531051  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:44.559144  245478 cri.go:89] found id: ""
	I1212 20:08:44.559171  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.559182  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:44.559189  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:44.559247  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:44.590049  245478 cri.go:89] found id: ""
	I1212 20:08:44.590084  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.590095  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:44.590104  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:44.590160  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:44.622244  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:44.622263  245478 cri.go:89] found id: ""
	I1212 20:08:44.622282  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:44.622339  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:44.627557  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:44.627626  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:44.660195  245478 cri.go:89] found id: ""
	I1212 20:08:44.660224  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.660235  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:44.660242  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:44.660319  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:44.693117  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:44.693139  245478 cri.go:89] found id: ""
	I1212 20:08:44.693150  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:44.693210  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:44.697292  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:44.697351  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:44.723617  245478 cri.go:89] found id: ""
	I1212 20:08:44.723643  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.723655  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:44.723663  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:44.723706  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:44.749049  245478 cri.go:89] found id: ""
	I1212 20:08:44.749072  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.749082  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:44.749093  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:44.749108  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:44.777101  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:44.777131  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:44.806670  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:44.806702  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:44.858042  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:44.858078  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:44.887913  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:44.887936  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:44.972043  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:44.972078  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:44.987485  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:44.987514  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:45.050138  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:45.050163  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:45.050177  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:47.584342  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:47.584685  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:47.584743  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:47.584789  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:47.616424  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:47.616450  245478 cri.go:89] found id: ""
	I1212 20:08:47.616461  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:47.616520  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:47.620238  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:47.620324  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:47.649702  245478 cri.go:89] found id: ""
	I1212 20:08:47.649726  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.649735  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:47.649742  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:47.649794  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:47.676235  245478 cri.go:89] found id: ""
	I1212 20:08:47.676260  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.676298  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:47.676312  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:47.676359  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:47.701927  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:47.701948  245478 cri.go:89] found id: ""
	I1212 20:08:47.701956  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:47.701998  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:47.705734  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:47.705799  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:47.731929  245478 cri.go:89] found id: ""
	I1212 20:08:47.731951  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.731960  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:47.731967  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:47.732014  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:47.758125  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:47.758145  245478 cri.go:89] found id: ""
	I1212 20:08:47.758154  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:47.758223  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:47.761929  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:47.761984  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:47.790519  245478 cri.go:89] found id: ""
	I1212 20:08:47.790543  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.790553  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:47.790560  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:47.790612  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:47.819270  245478 cri.go:89] found id: ""
	I1212 20:08:47.819317  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.819325  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:47.819334  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:47.819347  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:47.850165  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:47.850195  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:47.935473  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:47.935498  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:47.949744  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:47.949770  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:48.009754  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:48.009776  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:48.009798  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:48.044549  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:48.044583  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:48.073338  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:48.073371  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:48.102944  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:48.102973  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:49.120379  265161 out.go:252]   - Booting up control plane ...
	I1212 20:08:49.120511  265161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:08:49.120611  265161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:08:49.120695  265161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:08:49.132948  265161 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:08:49.133115  265161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:08:49.140292  265161 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:08:49.140508  265161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:08:49.140573  265161 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:08:49.239455  265161 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:08:49.239583  265161 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:08:49.741164  265161 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.848408ms
	I1212 20:08:49.745735  265161 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:08:49.745892  265161 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:08:49.746036  265161 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:08:49.746149  265161 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:08:50.751726  265161 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005922388s
	I1212 20:08:49.385501  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:49.385911  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:49.385966  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:49.386025  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:49.421491  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:49.421514  244825 cri.go:89] found id: ""
	I1212 20:08:49.421523  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:49.421577  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:49.425133  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:49.425192  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:49.458069  244825 cri.go:89] found id: ""
	I1212 20:08:49.458092  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.458099  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:49.458104  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:49.458146  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:49.492481  244825 cri.go:89] found id: ""
	I1212 20:08:49.492507  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.492517  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:49.492525  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:49.492577  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:49.534555  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:49.534578  244825 cri.go:89] found id: ""
	I1212 20:08:49.534588  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:49.534638  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:49.538211  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:49.538268  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:49.570252  244825 cri.go:89] found id: ""
	I1212 20:08:49.570296  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.570307  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:49.570315  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:49.570354  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:49.603802  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:49.603831  244825 cri.go:89] found id: ""
	I1212 20:08:49.603842  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:49.603897  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:49.607634  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:49.607692  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:49.642107  244825 cri.go:89] found id: ""
	I1212 20:08:49.642134  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.642145  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:49.642153  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:49.642206  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:49.677551  244825 cri.go:89] found id: ""
	I1212 20:08:49.677571  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.677578  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:49.677587  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:49.677603  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:49.713703  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:49.713726  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:49.801630  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:49.801652  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:49.816335  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:49.816356  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:49.872785  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.872807  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:49.872818  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:49.908696  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:49.908722  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:49.982729  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:49.982760  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:50.026341  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:50.026385  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:52.592829  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:52.593240  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:52.593311  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:52.593363  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:52.629141  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:52.629160  244825 cri.go:89] found id: ""
	I1212 20:08:52.629168  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:52.629213  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:52.633496  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:52.633559  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:52.672394  244825 cri.go:89] found id: ""
	I1212 20:08:52.672420  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.672432  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:52.672440  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:52.672489  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:52.706611  244825 cri.go:89] found id: ""
	I1212 20:08:52.706631  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.706638  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:52.706645  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:52.706697  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:52.741760  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:52.741779  244825 cri.go:89] found id: ""
	I1212 20:08:52.741797  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:52.741843  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:52.745525  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:52.745582  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:52.779744  244825 cri.go:89] found id: ""
	I1212 20:08:52.779764  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.779772  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:52.779778  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:52.779830  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:52.814544  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:52.814567  244825 cri.go:89] found id: ""
	I1212 20:08:52.814577  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:52.814635  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:52.818422  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:52.818483  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:52.852517  244825 cri.go:89] found id: ""
	I1212 20:08:52.852542  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.852552  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:52.852560  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:52.852627  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:52.889641  244825 cri.go:89] found id: ""
	I1212 20:08:52.889667  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.889679  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:52.889690  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:52.889705  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:52.928168  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:52.928195  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:53.025654  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:53.025682  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:53.040993  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:53.041016  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 20:08:49.090670  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:49.590513  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:50.090507  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:50.590494  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:51.090480  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:51.589774  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:52.090437  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:52.590330  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:53.090294  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:53.589908  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:53.663970  260486 kubeadm.go:1114] duration metric: took 11.650606291s to wait for elevateKubeSystemPrivileges
	I1212 20:08:53.664010  260486 kubeadm.go:403] duration metric: took 21.803106362s to StartCluster
	I1212 20:08:53.664043  260486 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:53.664120  260486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:08:53.665100  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:53.665353  260486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:08:53.665352  260486 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:08:53.665370  260486 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:08:53.665448  260486 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-824670"
	I1212 20:08:53.665562  260486 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-824670"
	I1212 20:08:53.665581  260486 config.go:182] Loaded profile config "old-k8s-version-824670": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 20:08:53.665459  260486 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-824670"
	I1212 20:08:53.665610  260486 host.go:66] Checking if "old-k8s-version-824670" exists ...
	I1212 20:08:53.665623  260486 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-824670"
	I1212 20:08:53.666080  260486 cli_runner.go:164] Run: docker container inspect old-k8s-version-824670 --format={{.State.Status}}
	I1212 20:08:53.666300  260486 cli_runner.go:164] Run: docker container inspect old-k8s-version-824670 --format={{.State.Status}}
	I1212 20:08:53.671336  260486 out.go:179] * Verifying Kubernetes components...
	I1212 20:08:53.672651  260486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:53.688751  260486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:51.359902  265161 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.614053888s
	I1212 20:08:53.747155  265161 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001354922s
	I1212 20:08:53.767113  265161 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:08:53.779336  265161 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:08:53.791775  265161 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:08:53.792108  265161 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-753103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:08:53.804380  265161 kubeadm.go:319] [bootstrap-token] Using token: ll5dd3.f0k4t0l7ykcnbls2
	I1212 20:08:53.690070  260486 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:08:53.690093  260486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:08:53.690157  260486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-824670
	I1212 20:08:53.690890  260486 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-824670"
	I1212 20:08:53.690933  260486 host.go:66] Checking if "old-k8s-version-824670" exists ...
	I1212 20:08:53.691454  260486 cli_runner.go:164] Run: docker container inspect old-k8s-version-824670 --format={{.State.Status}}
	I1212 20:08:53.713972  260486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/old-k8s-version-824670/id_rsa Username:docker}
	I1212 20:08:53.715263  260486 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:08:53.715342  260486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:08:53.715498  260486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-824670
	I1212 20:08:53.742805  260486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/old-k8s-version-824670/id_rsa Username:docker}
	I1212 20:08:53.773409  260486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:08:53.833678  260486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:08:53.863237  260486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:08:53.867547  260486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:08:50.682339  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:50.682741  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:50.682802  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:50.682867  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:50.713608  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:50.713629  245478 cri.go:89] found id: ""
	I1212 20:08:50.713639  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:50.713696  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:50.717785  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:50.717852  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:50.748187  245478 cri.go:89] found id: ""
	I1212 20:08:50.748210  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.748220  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:50.748226  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:50.748286  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:50.781753  245478 cri.go:89] found id: ""
	I1212 20:08:50.781866  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.781886  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:50.781896  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:50.781955  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:50.809701  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:50.809718  245478 cri.go:89] found id: ""
	I1212 20:08:50.809725  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:50.809771  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:50.814048  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:50.814105  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:50.843358  245478 cri.go:89] found id: ""
	I1212 20:08:50.843382  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.843392  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:50.843399  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:50.843460  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:50.875400  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:50.875424  245478 cri.go:89] found id: ""
	I1212 20:08:50.875435  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:50.875489  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:50.879408  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:50.879465  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:50.911038  245478 cri.go:89] found id: ""
	I1212 20:08:50.911066  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.911096  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:50.911109  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:50.911167  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:50.939706  245478 cri.go:89] found id: ""
	I1212 20:08:50.939730  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.939742  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:50.939755  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:50.939769  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:51.010431  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:51.010461  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:51.041775  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:51.041804  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:51.150469  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:51.150505  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:51.167403  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:51.167431  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:51.246376  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:51.246399  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:51.246430  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:51.284712  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:51.284738  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:51.319700  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:51.319740  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:53.863176  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:53.863599  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:53.863649  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:53.863695  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:53.910698  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:53.910726  245478 cri.go:89] found id: ""
	I1212 20:08:53.910736  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:53.910796  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:53.917626  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:53.917697  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:53.954369  245478 cri.go:89] found id: ""
	I1212 20:08:53.954406  245478 logs.go:282] 0 containers: []
	W1212 20:08:53.954417  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:53.954432  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:53.954492  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:53.986679  245478 cri.go:89] found id: ""
	I1212 20:08:53.986707  245478 logs.go:282] 0 containers: []
	W1212 20:08:53.986718  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:53.986745  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:53.986838  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:54.020608  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:54.020637  245478 cri.go:89] found id: ""
	I1212 20:08:54.020649  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:54.020714  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:54.025625  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:54.025689  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:54.056667  245478 cri.go:89] found id: ""
	I1212 20:08:54.056693  245478 logs.go:282] 0 containers: []
	W1212 20:08:54.056703  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:54.056711  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:54.056776  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:54.093334  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:54.093356  245478 cri.go:89] found id: ""
	I1212 20:08:54.093367  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:54.093424  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:54.098710  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:54.098771  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:54.132106  245478 cri.go:89] found id: ""
	I1212 20:08:54.132132  245478 logs.go:282] 0 containers: []
	W1212 20:08:54.132143  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:54.132151  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:54.132207  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:54.166292  245478 cri.go:89] found id: ""
	I1212 20:08:54.166319  245478 logs.go:282] 0 containers: []
	W1212 20:08:54.166330  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:54.166341  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:54.166356  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:54.201423  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:54.201456  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:54.234622  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:54.077955  260486 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 20:08:54.079217  260486 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-824670" to be "Ready" ...
	I1212 20:08:54.316431  260486 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:08:53.805419  265161 out.go:252]   - Configuring RBAC rules ...
	I1212 20:08:53.805631  265161 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:08:53.811830  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:08:53.819035  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:08:53.822103  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:08:53.826097  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:08:53.829811  265161 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:08:54.153969  265161 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:08:54.569794  265161 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:08:55.155654  265161 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:08:55.156494  265161 kubeadm.go:319] 
	I1212 20:08:55.156614  265161 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:08:55.156633  265161 kubeadm.go:319] 
	I1212 20:08:55.156758  265161 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:08:55.156767  265161 kubeadm.go:319] 
	I1212 20:08:55.156801  265161 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:08:55.156896  265161 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:08:55.156973  265161 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:08:55.156983  265161 kubeadm.go:319] 
	I1212 20:08:55.157106  265161 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:08:55.157126  265161 kubeadm.go:319] 
	I1212 20:08:55.157188  265161 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:08:55.157194  265161 kubeadm.go:319] 
	I1212 20:08:55.157261  265161 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:08:55.157379  265161 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:08:55.157462  265161 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:08:55.157468  265161 kubeadm.go:319] 
	I1212 20:08:55.157573  265161 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:08:55.157683  265161 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:08:55.157689  265161 kubeadm.go:319] 
	I1212 20:08:55.157787  265161 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ll5dd3.f0k4t0l7ykcnbls2 \
	I1212 20:08:55.157935  265161 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:08:55.157966  265161 kubeadm.go:319] 	--control-plane 
	I1212 20:08:55.157972  265161 kubeadm.go:319] 
	I1212 20:08:55.158103  265161 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:08:55.158108  265161 kubeadm.go:319] 
	I1212 20:08:55.158175  265161 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ll5dd3.f0k4t0l7ykcnbls2 \
	I1212 20:08:55.158267  265161 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:08:55.161712  265161 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:08:55.161921  265161 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:08:55.161959  265161 cni.go:84] Creating CNI manager for ""
	I1212 20:08:55.161972  265161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:55.163814  265161 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:08:55.164923  265161 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:08:55.170778  265161 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 20:08:55.170805  265161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:08:55.189074  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:08:55.456759  265161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:08:55.456832  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:55.456934  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-753103 minikube.k8s.io/updated_at=2025_12_12T20_08_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=no-preload-753103 minikube.k8s.io/primary=true
	I1212 20:08:55.550689  265161 ops.go:34] apiserver oom_adj: -16
	I1212 20:08:55.550908  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:56.051655  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1212 20:08:53.100047  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:53.100070  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:53.100086  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:53.139859  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:53.139890  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:53.214334  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:53.214363  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:53.249914  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:53.249942  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:55.795831  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:55.796300  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:55.796349  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:55.796410  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:55.837503  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:55.837526  244825 cri.go:89] found id: ""
	I1212 20:08:55.837538  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:55.837605  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:55.842734  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:55.842794  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:55.888479  244825 cri.go:89] found id: ""
	I1212 20:08:55.888503  244825 logs.go:282] 0 containers: []
	W1212 20:08:55.888515  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:55.888524  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:55.888583  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:55.935841  244825 cri.go:89] found id: ""
	I1212 20:08:55.935869  244825 logs.go:282] 0 containers: []
	W1212 20:08:55.935879  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:55.935886  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:55.935940  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:55.976004  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:55.976267  244825 cri.go:89] found id: ""
	I1212 20:08:55.976298  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:55.976357  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:55.981297  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:55.981367  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:56.019615  244825 cri.go:89] found id: ""
	I1212 20:08:56.019635  244825 logs.go:282] 0 containers: []
	W1212 20:08:56.019644  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:56.019650  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:56.019702  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:56.058661  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:56.058679  244825 cri.go:89] found id: ""
	I1212 20:08:56.058687  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:56.058729  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:56.062946  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:56.063004  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:56.103948  244825 cri.go:89] found id: ""
	I1212 20:08:56.103974  244825 logs.go:282] 0 containers: []
	W1212 20:08:56.103985  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:56.103992  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:56.104049  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:56.142834  244825 cri.go:89] found id: ""
	I1212 20:08:56.142859  244825 logs.go:282] 0 containers: []
	W1212 20:08:56.142870  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:56.142880  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:56.142898  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:56.180554  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:56.180587  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:56.251075  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:56.251103  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:56.286036  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:56.286064  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:56.333119  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:56.333147  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:56.369513  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:56.369536  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:56.462099  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:56.462131  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:56.477910  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:56.477936  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:56.536389  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:54.317775  260486 addons.go:530] duration metric: took 652.399388ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:08:54.583080  260486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-824670" context rescaled to 1 replicas
	W1212 20:08:56.083264  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	W1212 20:08:58.583322  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	I1212 20:08:54.236223  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:54.307681  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:54.307732  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:54.346919  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:54.346959  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:54.464774  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:54.464805  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:54.478843  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:54.478866  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:54.536619  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:54.536641  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:54.536656  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:57.073060  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:57.073477  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:57.073526  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:57.073568  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:57.101256  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:57.101301  245478 cri.go:89] found id: ""
	I1212 20:08:57.101313  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:57.101358  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:57.106018  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:57.106078  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:57.136416  245478 cri.go:89] found id: ""
	I1212 20:08:57.136441  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.136452  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:57.136461  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:57.136525  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:57.161989  245478 cri.go:89] found id: ""
	I1212 20:08:57.162013  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.162021  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:57.162029  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:57.162085  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:57.187899  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:57.187921  245478 cri.go:89] found id: ""
	I1212 20:08:57.187930  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:57.187980  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:57.191712  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:57.191779  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:57.216675  245478 cri.go:89] found id: ""
	I1212 20:08:57.216697  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.216707  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:57.216713  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:57.216766  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:57.241743  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:57.241766  245478 cri.go:89] found id: ""
	I1212 20:08:57.241774  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:57.241832  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:57.245476  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:57.245532  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:57.268665  245478 cri.go:89] found id: ""
	I1212 20:08:57.268682  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.268689  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:57.268694  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:57.268729  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:57.291927  245478 cri.go:89] found id: ""
	I1212 20:08:57.291951  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.291961  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:57.291973  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:57.291986  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:57.320628  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:57.320654  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:57.402141  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:57.402169  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:57.415875  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:57.415896  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:57.469540  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:57.469564  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:57.469582  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:57.499342  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:57.499367  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:57.523890  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:57.523918  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:57.548713  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:57.548738  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:56.551295  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:57.051766  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:57.550979  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:58.051576  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:58.551315  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:59.051482  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:59.551500  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:09:00.051915  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:09:00.118198  265161 kubeadm.go:1114] duration metric: took 4.661436508s to wait for elevateKubeSystemPrivileges
	I1212 20:09:00.118233  265161 kubeadm.go:403] duration metric: took 12.305771851s to StartCluster
	I1212 20:09:00.118253  265161 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:00.118351  265161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:09:00.119631  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:00.119863  265161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:09:00.119873  265161 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:09:00.119939  265161 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:09:00.120068  265161 addons.go:70] Setting storage-provisioner=true in profile "no-preload-753103"
	I1212 20:09:00.120089  265161 addons.go:239] Setting addon storage-provisioner=true in "no-preload-753103"
	I1212 20:09:00.120089  265161 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:09:00.120096  265161 addons.go:70] Setting default-storageclass=true in profile "no-preload-753103"
	I1212 20:09:00.120124  265161 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-753103"
	I1212 20:09:00.120138  265161 host.go:66] Checking if "no-preload-753103" exists ...
	I1212 20:09:00.120553  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:09:00.120733  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:09:00.121393  265161 out.go:179] * Verifying Kubernetes components...
	I1212 20:09:00.122791  265161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:09:00.147916  265161 addons.go:239] Setting addon default-storageclass=true in "no-preload-753103"
	I1212 20:09:00.147976  265161 host.go:66] Checking if "no-preload-753103" exists ...
	I1212 20:09:00.148527  265161 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:09:00.148620  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:09:00.149864  265161 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:09:00.149901  265161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:09:00.149954  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:09:00.189446  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:09:00.195707  265161 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:09:00.195733  265161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:09:00.195789  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:09:00.220877  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:09:00.224655  265161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:09:00.289097  265161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:09:00.312693  265161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:09:00.331369  265161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:09:00.415592  265161 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1212 20:09:00.417660  265161 node_ready.go:35] waiting up to 6m0s for node "no-preload-753103" to be "Ready" ...
	I1212 20:09:00.647875  265161 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:09:00.648868  265161 addons.go:530] duration metric: took 528.937079ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:09:00.920160  265161 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-753103" context rescaled to 1 replicas
	I1212 20:08:59.037358  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:59.037781  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:59.037847  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:59.037903  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:59.079792  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:59.079862  244825 cri.go:89] found id: ""
	I1212 20:08:59.079885  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:59.079952  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:59.084535  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:59.084600  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:59.120623  244825 cri.go:89] found id: ""
	I1212 20:08:59.120647  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.120657  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:59.120664  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:59.120725  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:59.166947  244825 cri.go:89] found id: ""
	I1212 20:08:59.166972  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.166980  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:59.166987  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:59.167051  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:59.201085  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:59.201107  244825 cri.go:89] found id: ""
	I1212 20:08:59.201114  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:59.201160  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:59.204775  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:59.204825  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:59.237433  244825 cri.go:89] found id: ""
	I1212 20:08:59.237456  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.237467  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:59.237475  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:59.237523  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:59.272242  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:59.272260  244825 cri.go:89] found id: ""
	I1212 20:08:59.272267  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:59.272333  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:59.275883  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:59.275951  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:59.311291  244825 cri.go:89] found id: ""
	I1212 20:08:59.311316  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.311326  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:59.311338  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:59.311407  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:59.343420  244825 cri.go:89] found id: ""
	I1212 20:08:59.343447  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.343457  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:59.343469  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:59.343482  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:59.360360  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:59.360387  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:59.417068  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:59.417090  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:59.417110  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:59.454912  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:59.454942  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:59.520691  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:59.520717  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:59.554112  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:59.554134  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:59.603891  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:59.603926  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:59.642900  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:59.642929  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.230066  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:02.230528  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:09:02.230588  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:02.230652  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:02.267435  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:02.267456  244825 cri.go:89] found id: ""
	I1212 20:09:02.267465  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:09:02.267538  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:02.271944  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:02.272021  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:02.305788  244825 cri.go:89] found id: ""
	I1212 20:09:02.305810  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.305818  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:09:02.305824  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:02.305868  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:02.341059  244825 cri.go:89] found id: ""
	I1212 20:09:02.341083  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.341094  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:09:02.341102  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:02.341152  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:02.378325  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:02.378348  244825 cri.go:89] found id: ""
	I1212 20:09:02.378356  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:09:02.378418  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:02.382187  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:02.382244  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:02.415045  244825 cri.go:89] found id: ""
	I1212 20:09:02.415069  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.415081  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:02.415088  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:02.415144  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:02.449295  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:02.449319  244825 cri.go:89] found id: ""
	I1212 20:09:02.449329  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:09:02.449378  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:02.453707  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:02.453768  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:02.490605  244825 cri.go:89] found id: ""
	I1212 20:09:02.490631  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.490642  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:02.490649  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:02.490703  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:02.526082  244825 cri.go:89] found id: ""
	I1212 20:09:02.526109  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.526121  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:02.526133  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:02.526143  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.615472  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:02.615509  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:02.631426  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:02.631451  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:02.689589  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:02.689614  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:09:02.689631  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:02.726590  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:09:02.726619  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:02.793016  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:09:02.793045  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:02.830324  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:02.830353  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:02.876390  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:09:02.876420  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 20:09:01.082905  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	W1212 20:09:03.084104  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	I1212 20:09:00.107828  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:00.108260  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:00.108343  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:00.108396  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:00.144384  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:00.144409  245478 cri.go:89] found id: ""
	I1212 20:09:00.144419  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:00.144473  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:00.150583  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:00.150667  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:00.208006  245478 cri.go:89] found id: ""
	I1212 20:09:00.208032  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.208042  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:00.208050  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:00.208102  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:00.245921  245478 cri.go:89] found id: ""
	I1212 20:09:00.245943  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.245953  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:00.245961  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:00.246014  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:00.282487  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:00.282510  245478 cri.go:89] found id: ""
	I1212 20:09:00.282520  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:00.282579  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:00.287707  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:00.287772  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:00.320553  245478 cri.go:89] found id: ""
	I1212 20:09:00.320574  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.320582  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:00.320590  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:00.320632  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:00.357974  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:00.358000  245478 cri.go:89] found id: ""
	I1212 20:09:00.358011  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:00.358068  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:00.363628  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:00.363739  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:00.399218  245478 cri.go:89] found id: ""
	I1212 20:09:00.399241  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.399249  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:00.399254  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:00.399315  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:00.435949  245478 cri.go:89] found id: ""
	I1212 20:09:00.435973  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.435984  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:00.435993  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:00.436008  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:00.509785  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:00.509813  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:00.549799  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:00.549830  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:00.653717  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:00.653743  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:00.668375  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:00.668401  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:00.722555  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:00.722577  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:00.722592  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:00.752797  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:00.752823  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:00.779395  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:00.779426  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:03.310519  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:03.310877  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:03.310937  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:03.310987  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:03.337348  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:03.337369  245478 cri.go:89] found id: ""
	I1212 20:09:03.337377  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:03.337436  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:03.341354  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:03.341413  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:03.366226  245478 cri.go:89] found id: ""
	I1212 20:09:03.366252  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.366262  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:03.366284  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:03.366347  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:03.390931  245478 cri.go:89] found id: ""
	I1212 20:09:03.390952  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.390962  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:03.390970  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:03.391020  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:03.414799  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:03.414820  245478 cri.go:89] found id: ""
	I1212 20:09:03.414830  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:03.414874  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:03.418421  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:03.418480  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:03.443496  245478 cri.go:89] found id: ""
	I1212 20:09:03.443516  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.443524  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:03.443537  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:03.443589  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:03.469224  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:03.469246  245478 cri.go:89] found id: ""
	I1212 20:09:03.469256  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:03.469340  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:03.472971  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:03.473017  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:03.496711  245478 cri.go:89] found id: ""
	I1212 20:09:03.496731  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.496739  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:03.496745  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:03.496802  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:03.520325  245478 cri.go:89] found id: ""
	I1212 20:09:03.520349  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.520358  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:03.520366  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:03.520380  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:03.533205  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:03.533225  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:03.586183  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:03.586201  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:03.586212  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:03.614522  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:03.614544  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:03.639883  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:03.639910  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:03.664803  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:03.664825  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:03.718505  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:03.718531  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:03.746292  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:03.746315  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 20:09:02.420833  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	W1212 20:09:04.421150  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	I1212 20:09:05.414545  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:05.414935  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:09:05.414982  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:05.415027  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:05.449032  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:05.449048  244825 cri.go:89] found id: ""
	I1212 20:09:05.449056  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:09:05.449104  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:05.452787  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:05.452844  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:05.486982  244825 cri.go:89] found id: ""
	I1212 20:09:05.487005  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.487015  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:09:05.487023  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:05.487074  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:05.519711  244825 cri.go:89] found id: ""
	I1212 20:09:05.519734  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.519743  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:09:05.519750  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:05.519802  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:05.553576  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:05.553594  244825 cri.go:89] found id: ""
	I1212 20:09:05.553603  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:09:05.553655  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:05.557137  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:05.557192  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:05.589898  244825 cri.go:89] found id: ""
	I1212 20:09:05.589926  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.589933  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:05.589974  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:05.590020  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:05.622238  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:05.622255  244825 cri.go:89] found id: ""
	I1212 20:09:05.622263  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:09:05.622323  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:05.625635  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:05.625692  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:05.657993  244825 cri.go:89] found id: ""
	I1212 20:09:05.658016  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.658026  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:05.658034  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:05.658077  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:05.689973  244825 cri.go:89] found id: ""
	I1212 20:09:05.689993  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.689999  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:05.690007  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:05.690017  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:05.735898  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:09:05.735922  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:05.771304  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:05.771329  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:05.860371  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:05.860400  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:05.876224  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:05.876252  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:05.935378  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:05.935400  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:09:05.935415  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:05.973013  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:09:05.973040  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:06.040032  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:09:06.040058  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	W1212 20:09:05.582191  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	I1212 20:09:07.082443  260486 node_ready.go:49] node "old-k8s-version-824670" is "Ready"
	I1212 20:09:07.082468  260486 node_ready.go:38] duration metric: took 13.003226267s for node "old-k8s-version-824670" to be "Ready" ...
	I1212 20:09:07.082481  260486 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:09:07.082524  260486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:07.094351  260486 api_server.go:72] duration metric: took 13.428906809s to wait for apiserver process to appear ...
	I1212 20:09:07.094373  260486 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:09:07.094387  260486 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:09:07.099476  260486 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 20:09:07.100618  260486 api_server.go:141] control plane version: v1.28.0
	I1212 20:09:07.100640  260486 api_server.go:131] duration metric: took 6.262135ms to wait for apiserver health ...
	I1212 20:09:07.100647  260486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:09:07.104672  260486 system_pods.go:59] 8 kube-system pods found
	I1212 20:09:07.104724  260486 system_pods.go:61] "coredns-5dd5756b68-shgbw" [2a42f31d-a757-492d-bd0f-539953154a92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:07.104738  260486 system_pods.go:61] "etcd-old-k8s-version-824670" [e3c6e799-4dac-4c0c-8063-2574684473bd] Running
	I1212 20:09:07.104752  260486 system_pods.go:61] "kindnet-75qr9" [16750e71-744f-4d14-9c72-513a0ef89bd9] Running
	I1212 20:09:07.104765  260486 system_pods.go:61] "kube-apiserver-old-k8s-version-824670" [d744d324-f28f-4417-bd24-10f31d44d033] Running
	I1212 20:09:07.104771  260486 system_pods.go:61] "kube-controller-manager-old-k8s-version-824670" [a546cec2-5f43-4c0a-b310-07fa485e55c4] Running
	I1212 20:09:07.104775  260486 system_pods.go:61] "kube-proxy-nwrgl" [500e6acc-e453-4e40-81df-5d6db1f0f764] Running
	I1212 20:09:07.104787  260486 system_pods.go:61] "kube-scheduler-old-k8s-version-824670" [87d76929-c951-4faf-8216-7c61d544cadb] Running
	I1212 20:09:07.104798  260486 system_pods.go:61] "storage-provisioner" [c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:07.104805  260486 system_pods.go:74] duration metric: took 4.151469ms to wait for pod list to return data ...
	I1212 20:09:07.104813  260486 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:09:07.107059  260486 default_sa.go:45] found service account: "default"
	I1212 20:09:07.107075  260486 default_sa.go:55] duration metric: took 2.25761ms for default service account to be created ...
	I1212 20:09:07.107083  260486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:09:07.109832  260486 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:07.109862  260486 system_pods.go:89] "coredns-5dd5756b68-shgbw" [2a42f31d-a757-492d-bd0f-539953154a92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:07.109868  260486 system_pods.go:89] "etcd-old-k8s-version-824670" [e3c6e799-4dac-4c0c-8063-2574684473bd] Running
	I1212 20:09:07.109874  260486 system_pods.go:89] "kindnet-75qr9" [16750e71-744f-4d14-9c72-513a0ef89bd9] Running
	I1212 20:09:07.109880  260486 system_pods.go:89] "kube-apiserver-old-k8s-version-824670" [d744d324-f28f-4417-bd24-10f31d44d033] Running
	I1212 20:09:07.109888  260486 system_pods.go:89] "kube-controller-manager-old-k8s-version-824670" [a546cec2-5f43-4c0a-b310-07fa485e55c4] Running
	I1212 20:09:07.109895  260486 system_pods.go:89] "kube-proxy-nwrgl" [500e6acc-e453-4e40-81df-5d6db1f0f764] Running
	I1212 20:09:07.109900  260486 system_pods.go:89] "kube-scheduler-old-k8s-version-824670" [87d76929-c951-4faf-8216-7c61d544cadb] Running
	I1212 20:09:07.109909  260486 system_pods.go:89] "storage-provisioner" [c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:07.109931  260486 retry.go:31] will retry after 218.897242ms: missing components: kube-dns
	I1212 20:09:07.332507  260486 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:07.332531  260486 system_pods.go:89] "coredns-5dd5756b68-shgbw" [2a42f31d-a757-492d-bd0f-539953154a92] Running
	I1212 20:09:07.332536  260486 system_pods.go:89] "etcd-old-k8s-version-824670" [e3c6e799-4dac-4c0c-8063-2574684473bd] Running
	I1212 20:09:07.332540  260486 system_pods.go:89] "kindnet-75qr9" [16750e71-744f-4d14-9c72-513a0ef89bd9] Running
	I1212 20:09:07.332544  260486 system_pods.go:89] "kube-apiserver-old-k8s-version-824670" [d744d324-f28f-4417-bd24-10f31d44d033] Running
	I1212 20:09:07.332548  260486 system_pods.go:89] "kube-controller-manager-old-k8s-version-824670" [a546cec2-5f43-4c0a-b310-07fa485e55c4] Running
	I1212 20:09:07.332551  260486 system_pods.go:89] "kube-proxy-nwrgl" [500e6acc-e453-4e40-81df-5d6db1f0f764] Running
	I1212 20:09:07.332556  260486 system_pods.go:89] "kube-scheduler-old-k8s-version-824670" [87d76929-c951-4faf-8216-7c61d544cadb] Running
	I1212 20:09:07.332561  260486 system_pods.go:89] "storage-provisioner" [c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e] Running
	I1212 20:09:07.332570  260486 system_pods.go:126] duration metric: took 225.480662ms to wait for k8s-apps to be running ...
	I1212 20:09:07.332586  260486 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:09:07.332636  260486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:09:07.345473  260486 system_svc.go:56] duration metric: took 12.877831ms WaitForService to wait for kubelet
	I1212 20:09:07.345507  260486 kubeadm.go:587] duration metric: took 13.680064163s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:09:07.345532  260486 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:09:07.347822  260486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:09:07.347840  260486 node_conditions.go:123] node cpu capacity is 8
	I1212 20:09:07.347856  260486 node_conditions.go:105] duration metric: took 2.317514ms to run NodePressure ...
	I1212 20:09:07.347871  260486 start.go:242] waiting for startup goroutines ...
	I1212 20:09:07.347884  260486 start.go:247] waiting for cluster config update ...
	I1212 20:09:07.347897  260486 start.go:256] writing updated cluster config ...
	I1212 20:09:07.348148  260486 ssh_runner.go:195] Run: rm -f paused
	I1212 20:09:07.351774  260486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:07.355201  260486 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-shgbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.359003  260486 pod_ready.go:94] pod "coredns-5dd5756b68-shgbw" is "Ready"
	I1212 20:09:07.359018  260486 pod_ready.go:86] duration metric: took 3.794603ms for pod "coredns-5dd5756b68-shgbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.361266  260486 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.364828  260486 pod_ready.go:94] pod "etcd-old-k8s-version-824670" is "Ready"
	I1212 20:09:07.364845  260486 pod_ready.go:86] duration metric: took 3.545245ms for pod "etcd-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.367102  260486 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.370556  260486 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-824670" is "Ready"
	I1212 20:09:07.370572  260486 pod_ready.go:86] duration metric: took 3.454086ms for pod "kube-apiserver-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.372722  260486 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.756094  260486 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-824670" is "Ready"
	I1212 20:09:07.756119  260486 pod_ready.go:86] duration metric: took 383.382536ms for pod "kube-controller-manager-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.956908  260486 pod_ready.go:83] waiting for pod "kube-proxy-nwrgl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.355318  260486 pod_ready.go:94] pod "kube-proxy-nwrgl" is "Ready"
	I1212 20:09:08.355340  260486 pod_ready.go:86] duration metric: took 398.410194ms for pod "kube-proxy-nwrgl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.556646  260486 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.956352  260486 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-824670" is "Ready"
	I1212 20:09:08.956380  260486 pod_ready.go:86] duration metric: took 399.711158ms for pod "kube-scheduler-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.956397  260486 pod_ready.go:40] duration metric: took 1.604599301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:09.007023  260486 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 20:09:09.008435  260486 out.go:203] 
	W1212 20:09:09.009546  260486 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 20:09:09.010578  260486 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 20:09:09.012097  260486 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-824670" cluster and "default" namespace by default
	I1212 20:09:06.335479  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:06.335842  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:06.335889  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:06.335935  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:06.361020  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:06.361037  245478 cri.go:89] found id: ""
	I1212 20:09:06.361045  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:06.361102  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:06.364916  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:06.364979  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:06.390399  245478 cri.go:89] found id: ""
	I1212 20:09:06.390422  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.390428  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:06.390434  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:06.390478  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:06.415074  245478 cri.go:89] found id: ""
	I1212 20:09:06.415099  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.415108  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:06.415114  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:06.415153  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:06.440338  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:06.440354  245478 cri.go:89] found id: ""
	I1212 20:09:06.440361  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:06.440408  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:06.443937  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:06.443994  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:06.469244  245478 cri.go:89] found id: ""
	I1212 20:09:06.469282  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.469294  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:06.469302  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:06.469354  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:06.494742  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:06.494766  245478 cri.go:89] found id: ""
	I1212 20:09:06.494776  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:06.494827  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:06.498685  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:06.498752  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:06.524955  245478 cri.go:89] found id: ""
	I1212 20:09:06.524980  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.524990  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:06.524999  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:06.525056  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:06.550840  245478 cri.go:89] found id: ""
	I1212 20:09:06.550862  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.550869  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:06.550878  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:06.550891  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:06.565171  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:06.565196  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:06.622946  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:06.622970  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:06.622990  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:06.657325  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:06.657352  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:06.682891  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:06.682917  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:06.708585  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:06.708609  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:06.760549  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:06.760574  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:06.788545  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:06.788574  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 20:09:06.920591  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	W1212 20:09:09.420880  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	I1212 20:09:08.573589  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:08.573949  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:09:08.573998  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:08.574041  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:08.608590  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:08.608608  244825 cri.go:89] found id: ""
	I1212 20:09:08.608620  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:09:08.608663  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:08.612214  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:08.612263  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:08.644786  244825 cri.go:89] found id: ""
	I1212 20:09:08.644807  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.644815  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:09:08.644820  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:08.644860  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:08.677788  244825 cri.go:89] found id: ""
	I1212 20:09:08.677806  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.677813  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:09:08.677829  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:08.677881  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:08.710126  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:08.710151  244825 cri.go:89] found id: ""
	I1212 20:09:08.710161  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:09:08.710215  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:08.713669  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:08.713724  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:08.746264  244825 cri.go:89] found id: ""
	I1212 20:09:08.746301  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.746311  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:08.746317  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:08.746360  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:08.779961  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:08.779981  244825 cri.go:89] found id: ""
	I1212 20:09:08.779989  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:09:08.780031  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:08.783576  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:08.783628  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:08.815490  244825 cri.go:89] found id: ""
	I1212 20:09:08.815508  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.815515  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:08.815522  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:08.815574  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:08.848479  244825 cri.go:89] found id: ""
	I1212 20:09:08.848499  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.848506  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:08.848514  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:08.848525  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:08.905468  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:08.905488  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:09:08.905500  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:08.942181  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:09:08.942204  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:09.020878  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:09:09.020908  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:09.065058  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:09.065083  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:09.114484  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:09:09.114512  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:09.154042  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:09.154070  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:09.249742  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:09.249771  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:11.767020  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:09.373667  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:09.374044  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:09.374095  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:09.374148  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:09.400018  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:09.400039  245478 cri.go:89] found id: ""
	I1212 20:09:09.400047  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:09.400087  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:09.403828  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:09.403877  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:09.429257  245478 cri.go:89] found id: ""
	I1212 20:09:09.429286  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.429297  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:09.429304  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:09.429362  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:09.454646  245478 cri.go:89] found id: ""
	I1212 20:09:09.454667  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.454676  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:09.454689  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:09.454741  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:09.479854  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:09.479874  245478 cri.go:89] found id: ""
	I1212 20:09:09.479884  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:09.479946  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:09.483922  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:09.483977  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:09.512707  245478 cri.go:89] found id: ""
	I1212 20:09:09.512731  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.512742  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:09.512751  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:09.512806  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:09.538698  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:09.538723  245478 cri.go:89] found id: ""
	I1212 20:09:09.538733  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:09.538778  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:09.542727  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:09.542799  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:09.567315  245478 cri.go:89] found id: ""
	I1212 20:09:09.567337  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.567348  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:09.567355  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:09.567410  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:09.591384  245478 cri.go:89] found id: ""
	I1212 20:09:09.591409  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.591418  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:09.591427  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:09.591436  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:09.644722  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:09.644741  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:09.644757  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:09.672822  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:09.672846  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:09.696229  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:09.696250  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:09.720906  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:09.720928  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:09.775396  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:09.775419  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:09.804131  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:09.804151  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:09.886660  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:09.886686  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:12.401525  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1212 20:09:11.421115  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	I1212 20:09:13.420801  265161 node_ready.go:49] node "no-preload-753103" is "Ready"
	I1212 20:09:13.420826  265161 node_ready.go:38] duration metric: took 13.003141419s for node "no-preload-753103" to be "Ready" ...
	I1212 20:09:13.420842  265161 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:09:13.420896  265161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:13.432723  265161 api_server.go:72] duration metric: took 13.312820705s to wait for apiserver process to appear ...
	I1212 20:09:13.432745  265161 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:09:13.432762  265161 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 20:09:13.438395  265161 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1212 20:09:13.439394  265161 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 20:09:13.439431  265161 api_server.go:131] duration metric: took 6.678569ms to wait for apiserver health ...
	I1212 20:09:13.439442  265161 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:09:13.442831  265161 system_pods.go:59] 8 kube-system pods found
	I1212 20:09:13.442865  265161 system_pods.go:61] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:13.442873  265161 system_pods.go:61] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:13.442900  265161 system_pods.go:61] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:13.442909  265161 system_pods.go:61] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:13.442916  265161 system_pods.go:61] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:13.442921  265161 system_pods.go:61] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:13.442934  265161 system_pods.go:61] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:13.442942  265161 system_pods.go:61] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:13.442952  265161 system_pods.go:74] duration metric: took 3.503011ms to wait for pod list to return data ...
	I1212 20:09:13.442964  265161 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:09:13.445920  265161 default_sa.go:45] found service account: "default"
	I1212 20:09:13.445942  265161 default_sa.go:55] duration metric: took 2.971793ms for default service account to be created ...
	I1212 20:09:13.445952  265161 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:09:13.449153  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:13.449182  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:13.449190  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:13.449202  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:13.449208  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:13.449217  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:13.449222  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:13.449233  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:13.449238  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:13.449293  265161 retry.go:31] will retry after 228.908933ms: missing components: kube-dns
	I1212 20:09:13.682039  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:13.682066  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:13.682072  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:13.682079  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:13.682082  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:13.682088  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:13.682094  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:13.682102  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:13.682107  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Running
	I1212 20:09:13.682126  265161 retry.go:31] will retry after 381.228296ms: missing components: kube-dns
	I1212 20:09:14.066919  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:14.066948  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:14.066953  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:14.066959  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:14.066962  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:14.066971  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:14.066976  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:14.066983  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:14.066990  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Running
	I1212 20:09:14.067009  265161 retry.go:31] will retry after 488.244704ms: missing components: kube-dns
	I1212 20:09:14.557983  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:14.558010  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Running
	I1212 20:09:14.558015  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:14.558020  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:14.558023  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:14.558029  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:14.558034  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:14.558046  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:14.558057  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Running
	I1212 20:09:14.558068  265161 system_pods.go:126] duration metric: took 1.112109785s to wait for k8s-apps to be running ...
	I1212 20:09:14.558080  265161 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:09:14.558119  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:09:14.570464  265161 system_svc.go:56] duration metric: took 12.375782ms WaitForService to wait for kubelet
	I1212 20:09:14.570488  265161 kubeadm.go:587] duration metric: took 14.450590539s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:09:14.570505  265161 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:09:14.572539  265161 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:09:14.572561  265161 node_conditions.go:123] node cpu capacity is 8
	I1212 20:09:14.572577  265161 node_conditions.go:105] duration metric: took 2.065626ms to run NodePressure ...
	I1212 20:09:14.572590  265161 start.go:242] waiting for startup goroutines ...
	I1212 20:09:14.572603  265161 start.go:247] waiting for cluster config update ...
	I1212 20:09:14.572621  265161 start.go:256] writing updated cluster config ...
	I1212 20:09:14.572868  265161 ssh_runner.go:195] Run: rm -f paused
	I1212 20:09:14.576398  265161 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:14.579043  265161 pod_ready.go:83] waiting for pod "coredns-7d764666f9-pbqw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.582351  265161 pod_ready.go:94] pod "coredns-7d764666f9-pbqw6" is "Ready"
	I1212 20:09:14.582370  265161 pod_ready.go:86] duration metric: took 3.309431ms for pod "coredns-7d764666f9-pbqw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.583980  265161 pod_ready.go:83] waiting for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.587324  265161 pod_ready.go:94] pod "etcd-no-preload-753103" is "Ready"
	I1212 20:09:14.587342  265161 pod_ready.go:86] duration metric: took 3.345068ms for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.590996  265161 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.594241  265161 pod_ready.go:94] pod "kube-apiserver-no-preload-753103" is "Ready"
	I1212 20:09:14.594261  265161 pod_ready.go:86] duration metric: took 3.248013ms for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.595945  265161 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.979814  265161 pod_ready.go:94] pod "kube-controller-manager-no-preload-753103" is "Ready"
	I1212 20:09:14.979844  265161 pod_ready.go:86] duration metric: took 383.881079ms for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:15.181064  265161 pod_ready.go:83] waiting for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:15.580629  265161 pod_ready.go:94] pod "kube-proxy-xn425" is "Ready"
	I1212 20:09:15.580651  265161 pod_ready.go:86] duration metric: took 399.55808ms for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:15.780735  265161 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:16.180820  265161 pod_ready.go:94] pod "kube-scheduler-no-preload-753103" is "Ready"
	I1212 20:09:16.180849  265161 pod_ready.go:86] duration metric: took 400.091787ms for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:16.180865  265161 pod_ready.go:40] duration metric: took 1.604438666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:16.226513  265161 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 20:09:16.230096  265161 out.go:179] * Done! kubectl is now configured to use "no-preload-753103" cluster and "default" namespace by default
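	
	For reference only (not part of the captured test output): a minimal Go sketch of the kind of apiserver healthz probe the api_server.go lines above describe. The endpoint address is copied from the log; the standalone program and the insecure TLS transport are assumptions for a throwaway local test cluster and are not minikube's own implementation, which authenticates with the cluster CA.
	
	    package main
	
	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    )
	
	    func main() {
	    	// Skip certificate verification only because this targets a disposable
	    	// local test cluster (assumption); a real check should use the cluster CA.
	    	client := &http.Client{Transport: &http.Transport{
	    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    	}}
	    	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	    	if err != nil {
	    		fmt.Println("healthz check failed:", err)
	    		return
	    	}
	    	defer resp.Body.Close()
	    	// The log above records this endpoint returning 200 with body "ok".
	    	fmt.Println("healthz returned", resp.StatusCode)
	    }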
	
	
	==> CRI-O <==
	Dec 12 20:09:06 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:06.978151418Z" level=info msg="Starting container: 8f1e77025acca4453b80365f1026991e9e51b01a82021af206f36fd9df4068dc" id=1c74dbc6-218f-4f19-b363-1bc804b9c24d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:09:06 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:06.979879069Z" level=info msg="Started container" PID=2142 containerID=8f1e77025acca4453b80365f1026991e9e51b01a82021af206f36fd9df4068dc description=kube-system/coredns-5dd5756b68-shgbw/coredns id=1c74dbc6-218f-4f19-b363-1bc804b9c24d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f9bb30b0bb81ebad8fb53c40ec5ad6f2d70c6804c53594caa2c13fc7cfb730b
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.48257661Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7a693861-52af-4c8f-8eb0-3661249dafcc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.482659011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.487321037Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2b9f36ffef444886bd52d6b51d2ae5f9e1618cd51229c50164fc0268921a09e5 UID:6c5ea4b4-8ab0-4bd9-ac11-07892e94a6d2 NetNS:/var/run/netns/a1c62e82-aa10-4c4f-a313-f01ccf27fae9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f16a98}] Aliases:map[]}"
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.487357901Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.496445015Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2b9f36ffef444886bd52d6b51d2ae5f9e1618cd51229c50164fc0268921a09e5 UID:6c5ea4b4-8ab0-4bd9-ac11-07892e94a6d2 NetNS:/var/run/netns/a1c62e82-aa10-4c4f-a313-f01ccf27fae9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f16a98}] Aliases:map[]}"
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.496584205Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.497391269Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.498587753Z" level=info msg="Ran pod sandbox 2b9f36ffef444886bd52d6b51d2ae5f9e1618cd51229c50164fc0268921a09e5 with infra container: default/busybox/POD" id=7a693861-52af-4c8f-8eb0-3661249dafcc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.499811843Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd241c41-cead-4551-9431-08439ac014f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.499939445Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cd241c41-cead-4551-9431-08439ac014f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.499984491Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cd241c41-cead-4551-9431-08439ac014f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.500528019Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=648f5896-c361-411d-b272-e65bb6685d6c name=/runtime.v1.ImageService/PullImage
	Dec 12 20:09:09 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:09.50190894Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.136508454Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=648f5896-c361-411d-b272-e65bb6685d6c name=/runtime.v1.ImageService/PullImage
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.137169313Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=389f24b2-ac76-4861-9e27-0ebbfa0765ad name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.138555703Z" level=info msg="Creating container: default/busybox/busybox" id=e8b24464-be86-416c-818d-f2dd0cd5ba0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.138652114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.142621656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.143004836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.170402732Z" level=info msg="Created container 94a65b414038db1cd2fc947d4b9a3b575d33b8e9a5fbc2e11a4f633066242817: default/busybox/busybox" id=e8b24464-be86-416c-818d-f2dd0cd5ba0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.170965353Z" level=info msg="Starting container: 94a65b414038db1cd2fc947d4b9a3b575d33b8e9a5fbc2e11a4f633066242817" id=d62ba5f1-cc5e-4b7b-b9c4-56e1d1acaed0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:09:10 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:10.172917796Z" level=info msg="Started container" PID=2221 containerID=94a65b414038db1cd2fc947d4b9a3b575d33b8e9a5fbc2e11a4f633066242817 description=default/busybox/busybox id=d62ba5f1-cc5e-4b7b-b9c4-56e1d1acaed0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b9f36ffef444886bd52d6b51d2ae5f9e1618cd51229c50164fc0268921a09e5
	Dec 12 20:09:16 old-k8s-version-824670 crio[784]: time="2025-12-12T20:09:16.274532348Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	94a65b414038d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   2b9f36ffef444       busybox                                          default
	8f1e77025acca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      10 seconds ago      Running             coredns                   0                   8f9bb30b0bb81       coredns-5dd5756b68-shgbw                         kube-system
	ade0f0604b699       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   f663088c13c09       storage-provisioner                              kube-system
	f8baba37f910c       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   7c4626c8a8602       kindnet-75qr9                                    kube-system
	2d97b04deb672       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      23 seconds ago      Running             kube-proxy                0                   77b69259319b0       kube-proxy-nwrgl                                 kube-system
	66ce6de7e3590       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   23a0eade3a78b       kube-apiserver-old-k8s-version-824670            kube-system
	ecc7704bd0aa3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   0454c1b605996       etcd-old-k8s-version-824670                      kube-system
	cd654b870ea3c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   1a24215a3f62b       kube-controller-manager-old-k8s-version-824670   kube-system
	92437108d9e43       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   45743c8522cdb       kube-scheduler-old-k8s-version-824670            kube-system
	
	
	==> coredns [8f1e77025acca4453b80365f1026991e9e51b01a82021af206f36fd9df4068dc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45552 - 5159 "HINFO IN 1414226486457497401.5501567431862328555. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074592583s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-824670
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-824670
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=old-k8s-version-824670
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_08_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:08:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-824670
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:09:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:09:11 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:09:11 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:09:11 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:09:11 +0000   Fri, 12 Dec 2025 20:09:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-824670
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                1fb8fe54-c4b9-4491-b301-c9b4220778ba
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-shgbw                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-old-k8s-version-824670                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-75qr9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-old-k8s-version-824670             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-824670    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-nwrgl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-old-k8s-version-824670             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 36s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node old-k8s-version-824670 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node old-k8s-version-824670 event: Registered Node old-k8s-version-824670 in Controller
	  Normal  NodeReady                11s   kubelet          Node old-k8s-version-824670 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [ecc7704bd0aa39969152b7b076ec97257baca3f908466a26909d5aff1e92d6af] <==
	{"level":"info","ts":"2025-12-12T20:08:35.761789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-12T20:08:35.762591Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-12T20:08:35.763795Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-12T20:08:35.76391Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-12T20:08:35.763931Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-12T20:08:35.76402Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-12T20:08:35.764051Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-12T20:08:36.754423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-12T20:08:36.75447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-12T20:08:36.754488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-12T20:08:36.754502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-12T20:08:36.75451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-12T20:08:36.754522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-12T20:08:36.754531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-12T20:08:36.755444Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:08:36.756004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T20:08:36.755999Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-824670 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-12T20:08:36.756033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T20:08:36.756168Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-12T20:08:36.756188Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-12T20:08:36.756308Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:08:36.756408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:08:36.756441Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:08:36.758009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-12T20:08:36.75837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:09:17 up 51 min,  0 user,  load average: 1.48, 1.72, 1.37
	Linux old-k8s-version-824670 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f8baba37f910c6f27b65ed5f5256934e61af47ba0637be81f1b854e55816a4d9] <==
	I1212 20:08:56.279937       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:08:56.280231       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 20:08:56.280407       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:08:56.280427       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:08:56.280451       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:08:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:08:56.574417       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:08:56.574462       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:08:56.574487       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:08:56.574668       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:08:56.874754       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:08:56.874796       1 metrics.go:72] Registering metrics
	I1212 20:08:56.874850       1 controller.go:711] "Syncing nftables rules"
	I1212 20:09:06.574585       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:09:06.574636       1 main.go:301] handling current node
	I1212 20:09:16.574104       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:09:16.574142       1 main.go:301] handling current node
	
	
	==> kube-apiserver [66ce6de7e3590bf264881b2311b54b8eb6dac2d5168f687d4d43435c080e3f4a] <==
	I1212 20:08:37.868485       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:08:37.868502       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:08:37.868511       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:08:37.868523       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:08:37.869086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 20:08:37.870840       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:08:37.871987       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 20:08:37.883377       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:08:37.885534       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:08:37.885873       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 20:08:38.773847       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 20:08:38.777822       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:08:38.777840       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:08:39.177201       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:08:39.212534       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:08:39.286324       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:08:39.292423       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1212 20:08:39.293601       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 20:08:39.297678       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:08:39.828300       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:08:40.990336       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:08:40.998880       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:08:41.007168       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 20:08:53.796838       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 20:08:53.812518       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [cd654b870ea3cf9324b7f2f8315d633bb7d830df063bd2de509f00d5ae6bdc86] <==
	I1212 20:08:53.828445       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 20:08:53.828639       1 shared_informer.go:318] Caches are synced for endpoint
	I1212 20:08:53.831818       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 20:08:53.844875       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 20:08:53.853979       1 shared_informer.go:318] Caches are synced for PVC protection
	I1212 20:08:53.859702       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8dg2f"
	I1212 20:08:53.860678       1 shared_informer.go:318] Caches are synced for ephemeral
	I1212 20:08:53.865764       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:08:53.889057       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-shgbw"
	I1212 20:08:53.907417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.982437ms"
	I1212 20:08:53.936808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.29336ms"
	I1212 20:08:53.936985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.364µs"
	I1212 20:08:54.103674       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 20:08:54.112399       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8dg2f"
	I1212 20:08:54.122121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.732349ms"
	I1212 20:08:54.127453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.281181ms"
	I1212 20:08:54.127563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.782µs"
	I1212 20:08:54.179415       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:08:54.179451       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:08:54.189316       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:09:06.634003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.958µs"
	I1212 20:09:06.646675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.84µs"
	I1212 20:09:07.152754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.578596ms"
	I1212 20:09:07.152878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.449µs"
	I1212 20:09:08.778776       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [2d97b04deb6728a96e5a50a1dff5c2612144c67f830812980083e507006fbd33] <==
	I1212 20:08:54.264125       1 server_others.go:69] "Using iptables proxy"
	I1212 20:08:54.274631       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1212 20:08:54.296609       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:08:54.299447       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:08:54.299479       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 20:08:54.299488       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 20:08:54.299531       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:08:54.299756       1 server.go:846] "Version info" version="v1.28.0"
	I1212 20:08:54.299772       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:08:54.300968       1 config.go:188] "Starting service config controller"
	I1212 20:08:54.301014       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:08:54.301054       1 config.go:315] "Starting node config controller"
	I1212 20:08:54.301066       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:08:54.301134       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:08:54.301188       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:08:54.401671       1 shared_informer.go:318] Caches are synced for node config
	I1212 20:08:54.401707       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:08:54.402872       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [92437108d9e43e014854165ea9c8a33bfd7b6c1306a80b05fd447f324a7be163] <==
	W1212 20:08:37.834161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 20:08:37.834589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 20:08:37.834171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 20:08:37.834632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 20:08:37.834213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:08:37.834686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 20:08:37.834220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 20:08:37.834739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 20:08:37.834247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 20:08:37.834765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 20:08:37.834430       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 20:08:37.834788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 20:08:38.704721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 20:08:38.704770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 20:08:38.748353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 20:08:38.748393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 20:08:38.754743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 20:08:38.754773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 20:08:38.841659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:08:38.841697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 20:08:38.904976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 20:08:38.905036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 20:08:39.202241       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 20:08:39.202296       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 20:08:41.818910       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.731252    1398 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.834024    1398 topology_manager.go:215] "Topology Admit Handler" podUID="16750e71-744f-4d14-9c72-513a0ef89bd9" podNamespace="kube-system" podName="kindnet-75qr9"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.842644    1398 topology_manager.go:215] "Topology Admit Handler" podUID="500e6acc-e453-4e40-81df-5d6db1f0f764" podNamespace="kube-system" podName="kube-proxy-nwrgl"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941432    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16750e71-744f-4d14-9c72-513a0ef89bd9-xtables-lock\") pod \"kindnet-75qr9\" (UID: \"16750e71-744f-4d14-9c72-513a0ef89bd9\") " pod="kube-system/kindnet-75qr9"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941693    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/500e6acc-e453-4e40-81df-5d6db1f0f764-kube-proxy\") pod \"kube-proxy-nwrgl\" (UID: \"500e6acc-e453-4e40-81df-5d6db1f0f764\") " pod="kube-system/kube-proxy-nwrgl"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941728    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/500e6acc-e453-4e40-81df-5d6db1f0f764-xtables-lock\") pod \"kube-proxy-nwrgl\" (UID: \"500e6acc-e453-4e40-81df-5d6db1f0f764\") " pod="kube-system/kube-proxy-nwrgl"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941756    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/500e6acc-e453-4e40-81df-5d6db1f0f764-lib-modules\") pod \"kube-proxy-nwrgl\" (UID: \"500e6acc-e453-4e40-81df-5d6db1f0f764\") " pod="kube-system/kube-proxy-nwrgl"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941786    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16750e71-744f-4d14-9c72-513a0ef89bd9-lib-modules\") pod \"kindnet-75qr9\" (UID: \"16750e71-744f-4d14-9c72-513a0ef89bd9\") " pod="kube-system/kindnet-75qr9"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941839    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4gp\" (UniqueName: \"kubernetes.io/projected/16750e71-744f-4d14-9c72-513a0ef89bd9-kube-api-access-9w4gp\") pod \"kindnet-75qr9\" (UID: \"16750e71-744f-4d14-9c72-513a0ef89bd9\") " pod="kube-system/kindnet-75qr9"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941872    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljkz\" (UniqueName: \"kubernetes.io/projected/500e6acc-e453-4e40-81df-5d6db1f0f764-kube-api-access-vljkz\") pod \"kube-proxy-nwrgl\" (UID: \"500e6acc-e453-4e40-81df-5d6db1f0f764\") " pod="kube-system/kube-proxy-nwrgl"
	Dec 12 20:08:53 old-k8s-version-824670 kubelet[1398]: I1212 20:08:53.941902    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/16750e71-744f-4d14-9c72-513a0ef89bd9-cni-cfg\") pod \"kindnet-75qr9\" (UID: \"16750e71-744f-4d14-9c72-513a0ef89bd9\") " pod="kube-system/kindnet-75qr9"
	Dec 12 20:08:55 old-k8s-version-824670 kubelet[1398]: I1212 20:08:55.119583    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nwrgl" podStartSLOduration=2.119529791 podCreationTimestamp="2025-12-12 20:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:08:55.119068691 +0000 UTC m=+14.152916857" watchObservedRunningTime="2025-12-12 20:08:55.119529791 +0000 UTC m=+14.153377948"
	Dec 12 20:08:57 old-k8s-version-824670 kubelet[1398]: I1212 20:08:57.123037    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-75qr9" podStartSLOduration=2.195834075 podCreationTimestamp="2025-12-12 20:08:53 +0000 UTC" firstStartedPulling="2025-12-12 20:08:54.143239112 +0000 UTC m=+13.177087263" lastFinishedPulling="2025-12-12 20:08:56.070387849 +0000 UTC m=+15.104235994" observedRunningTime="2025-12-12 20:08:57.122742082 +0000 UTC m=+16.156590255" watchObservedRunningTime="2025-12-12 20:08:57.122982806 +0000 UTC m=+16.156830965"
	Dec 12 20:09:06 old-k8s-version-824670 kubelet[1398]: I1212 20:09:06.611214    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 20:09:06 old-k8s-version-824670 kubelet[1398]: I1212 20:09:06.634261    1398 topology_manager.go:215] "Topology Admit Handler" podUID="2a42f31d-a757-492d-bd0f-539953154a92" podNamespace="kube-system" podName="coredns-5dd5756b68-shgbw"
	Dec 12 20:09:06 old-k8s-version-824670 kubelet[1398]: I1212 20:09:06.634654    1398 topology_manager.go:215] "Topology Admit Handler" podUID="c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 20:09:06 old-k8s-version-824670 kubelet[1398]: I1212 20:09:06.731920    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpg72\" (UniqueName: \"kubernetes.io/projected/c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e-kube-api-access-vpg72\") pod \"storage-provisioner\" (UID: \"c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e\") " pod="kube-system/storage-provisioner"
	Dec 12 20:09:06 old-k8s-version-824670 kubelet[1398]: I1212 20:09:06.731979    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a42f31d-a757-492d-bd0f-539953154a92-config-volume\") pod \"coredns-5dd5756b68-shgbw\" (UID: \"2a42f31d-a757-492d-bd0f-539953154a92\") " pod="kube-system/coredns-5dd5756b68-shgbw"
	Dec 12 20:09:06 old-k8s-version-824670 kubelet[1398]: I1212 20:09:06.732010    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e-tmp\") pod \"storage-provisioner\" (UID: \"c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e\") " pod="kube-system/storage-provisioner"
	Dec 12 20:09:06 old-k8s-version-824670 kubelet[1398]: I1212 20:09:06.732050    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsc7\" (UniqueName: \"kubernetes.io/projected/2a42f31d-a757-492d-bd0f-539953154a92-kube-api-access-sxsc7\") pod \"coredns-5dd5756b68-shgbw\" (UID: \"2a42f31d-a757-492d-bd0f-539953154a92\") " pod="kube-system/coredns-5dd5756b68-shgbw"
	Dec 12 20:09:07 old-k8s-version-824670 kubelet[1398]: I1212 20:09:07.137853    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.13780379 podCreationTimestamp="2025-12-12 20:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:09:07.137543603 +0000 UTC m=+26.171391762" watchObservedRunningTime="2025-12-12 20:09:07.13780379 +0000 UTC m=+26.171651948"
	Dec 12 20:09:07 old-k8s-version-824670 kubelet[1398]: I1212 20:09:07.146170    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-shgbw" podStartSLOduration=14.146129499 podCreationTimestamp="2025-12-12 20:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:09:07.146051459 +0000 UTC m=+26.179899617" watchObservedRunningTime="2025-12-12 20:09:07.146129499 +0000 UTC m=+26.179977657"
	Dec 12 20:09:09 old-k8s-version-824670 kubelet[1398]: I1212 20:09:09.180664    1398 topology_manager.go:215] "Topology Admit Handler" podUID="6c5ea4b4-8ab0-4bd9-ac11-07892e94a6d2" podNamespace="default" podName="busybox"
	Dec 12 20:09:09 old-k8s-version-824670 kubelet[1398]: I1212 20:09:09.247262    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd68j\" (UniqueName: \"kubernetes.io/projected/6c5ea4b4-8ab0-4bd9-ac11-07892e94a6d2-kube-api-access-gd68j\") pod \"busybox\" (UID: \"6c5ea4b4-8ab0-4bd9-ac11-07892e94a6d2\") " pod="default/busybox"
	Dec 12 20:09:11 old-k8s-version-824670 kubelet[1398]: I1212 20:09:11.149072    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5124704709999999 podCreationTimestamp="2025-12-12 20:09:09 +0000 UTC" firstStartedPulling="2025-12-12 20:09:09.500170818 +0000 UTC m=+28.534018967" lastFinishedPulling="2025-12-12 20:09:10.136721343 +0000 UTC m=+29.170569483" observedRunningTime="2025-12-12 20:09:11.148593719 +0000 UTC m=+30.182441877" watchObservedRunningTime="2025-12-12 20:09:11.149020987 +0000 UTC m=+30.182869145"
	
	
	==> storage-provisioner [ade0f0604b69910ebd1354454b9d967ea6c5fc5cfe7869a721df7172109c60a0] <==
	I1212 20:09:06.987539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:09:06.995892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:09:06.995952       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 20:09:07.003560       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:09:07.003613       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f695fbd6-0ef5-496c-8640-6e2ff454cd84", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-824670_49854629-fa59-4366-93bb-dbc2f186732e became leader
	I1212 20:09:07.003712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-824670_49854629-fa59-4366-93bb-dbc2f186732e!
	I1212 20:09:07.103938       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-824670_49854629-fa59-4366-93bb-dbc2f186732e!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-824670 -n old-k8s-version-824670
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-824670 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (230.158013ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:09:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
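Note: the exit status 11 here comes from minikube's pre-flight "is the cluster paused?" check, which, per the stderr above, shells `sudo runc list -f json` into the node; on this cri-o node `/run/runc` does not exist, so the probe fails before the metrics-server addon is ever applied. A minimal sketch of reproducing that probe over `minikube ssh` follows (the `runInNode` helper and the file name are hypothetical, not minikube's code):

	// probe_paused.go: a hypothetical reproduction of the failing "check paused"
	// probe, not minikube's implementation; it runs the same two commands over
	// `minikube ssh` so the contrast between runc and crictl is visible.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runInNode runs a command inside the minikube node via `minikube ssh`.
	func runInNode(profile string, args ...string) (string, error) {
		sshArgs := append([]string{"-p", profile, "ssh", "--"}, args...)
		out, err := exec.Command("out/minikube-linux-amd64", sshArgs...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "no-preload-753103"

		// The probe from the error message: /run/runc is absent on this node,
		// so the command exits non-zero and minikube reports MK_ADDON_ENABLE_PAUSED.
		if out, err := runInNode(profile, "sudo", "runc", "list", "-f", "json"); err != nil {
			fmt.Printf("runc probe failed (as in the log above): %v\n%s", err, out)
		}

		// Asking the CRI runtime directly works on the same node.
		if out, err := runInNode(profile, "sudo", "crictl", "ps"); err == nil {
			fmt.Printf("crictl sees the containers:\n%s", out)
		}
	}

Run with `go run probe_paused.go` while the profile is up; the first branch should print the same `open /run/runc: no such file or directory` error seen in the stderr block above, while the crictl listing succeeds.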
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-753103 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-753103 describe deploy/metrics-server -n kube-system: exit status 1 (55.364221ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-753103 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
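For reference, the expected substring in that assertion is simply the custom registry from `--registries=MetricsServer=fake.domain` prefixed onto the image from `--images=MetricsServer=registry.k8s.io/echoserver:1.4`; the deployment info is empty because the enable step above never created the deployment. A trivial sketch of that string construction (file name of my own choosing, not the test's code):

	// expected_image.go: a hypothetical illustration of how the expected image
	// string in the assertion above is assembled from the two addon flags.
	package main

	import "fmt"

	func main() {
		registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4
		fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
	}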
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-753103
helpers_test.go:244: (dbg) docker inspect no-preload-753103:

-- stdout --
	[
	    {
	        "Id": "452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd",
	        "Created": "2025-12-12T20:08:31.941720816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:08:31.975330893Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/hosts",
	        "LogPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd-json.log",
	        "Name": "/no-preload-753103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-753103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-753103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd",
	                "LowerDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-753103",
	                "Source": "/var/lib/docker/volumes/no-preload-753103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-753103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-753103",
	                "name.minikube.sigs.k8s.io": "no-preload-753103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7c773539de5e036f0433493b465810548621c3cefcc1313e740e96b370b4fd74",
	            "SandboxKey": "/var/run/docker/netns/7c773539de5e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-753103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5f00e9d4498c5a5c29031e27e31d73fb062b781edd69002a7dac693e0d7a335",
	                    "EndpointID": "5d1776d41c303cfd55c9bcbacb36dad2f7cd2513a1a82554abb335228c027c39",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "22:ff:7f:11:a2:d0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-753103",
	                        "452e89832e40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-753103 logs -n 25: (1.004953488s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p NoKubernetes-562130 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │                     │
	│ delete  │ -p NoKubernetes-562130                                                                                                                                                                                                                        │ NoKubernetes-562130       │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:04 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:04 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p force-systemd-env-361023                                                                                                                                                                                                                   │ force-systemd-env-361023  │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p cert-options-427408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p pause-243084 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ cert-options-427408 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ ssh     │ -p cert-options-427408 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p cert-options-427408                                                                                                                                                                                                                        │ cert-options-427408       │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ pause   │ -p pause-243084 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ delete  │ -p pause-243084                                                                                                                                                                                                                               │ pause-243084              │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ stopped-upgrade-180826    │ jenkins │ v1.35.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                  │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ stopped-upgrade-180826 stop                                                                                                                                                                                                                   │ stopped-upgrade-180826    │ jenkins │ v1.35.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:06 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-180826    │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-991615 │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │                     │
	│ delete  │ -p running-upgrade-569692                                                                                                                                                                                                                     │ running-upgrade-569692    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-824670    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ delete  │ -p cert-expiration-070436                                                                                                                                                                                                                     │ cert-expiration-070436    │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-753103         │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-824670    │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p old-k8s-version-824670 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-824670    │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753103         │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:08:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:08:31.121762  265161 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:08:31.121955  265161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:08:31.121966  265161 out.go:374] Setting ErrFile to fd 2...
	I1212 20:08:31.121973  265161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:08:31.122187  265161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:08:31.122771  265161 out.go:368] Setting JSON to false
	I1212 20:08:31.124154  265161 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3058,"bootTime":1765567053,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:08:31.124216  265161 start.go:143] virtualization: kvm guest
	I1212 20:08:31.126591  265161 out.go:179] * [no-preload-753103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:08:31.127758  265161 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:08:31.127787  265161 notify.go:221] Checking for updates...
	I1212 20:08:31.130186  265161 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:08:31.131499  265161 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:08:31.132688  265161 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:08:31.133911  265161 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:08:31.137455  265161 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:08:31.138970  265161 config.go:182] Loaded profile config "kubernetes-upgrade-991615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:08:31.139076  265161 config.go:182] Loaded profile config "old-k8s-version-824670": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 20:08:31.139143  265161 config.go:182] Loaded profile config "stopped-upgrade-180826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 20:08:31.139228  265161 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:08:31.166766  265161 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:08:31.166897  265161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:08:31.218894  265161 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:08:31.209872955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:08:31.218994  265161 docker.go:319] overlay module found
	I1212 20:08:31.221397  265161 out.go:179] * Using the docker driver based on user configuration
	I1212 20:08:31.222502  265161 start.go:309] selected driver: docker
	I1212 20:08:31.222514  265161 start.go:927] validating driver "docker" against <nil>
	I1212 20:08:31.222525  265161 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:08:31.223046  265161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:08:31.278338  265161 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:08:31.268478912 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:08:31.278479  265161 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:08:31.278674  265161 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:08:31.280187  265161 out.go:179] * Using Docker driver with root privileges
	I1212 20:08:31.281231  265161 cni.go:84] Creating CNI manager for ""
	I1212 20:08:31.281308  265161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:31.281320  265161 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:08:31.281373  265161 start.go:353] cluster config:
	{Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:08:31.282485  265161 out.go:179] * Starting "no-preload-753103" primary control-plane node in "no-preload-753103" cluster
	I1212 20:08:31.283344  265161 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:08:31.284391  265161 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:08:31.285310  265161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:08:31.285385  265161 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:08:31.285404  265161 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/config.json ...
	I1212 20:08:31.285426  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/config.json: {Name:mkd8a2177844ac0db49bb2822f66a51efdeb8945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.285585  265161 cache.go:107] acquiring lock: {Name:mkd03888e9d28c9db065b51c032322735ca0cefa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285624  265161 cache.go:107] acquiring lock: {Name:mk459cb9c4c0f7c593fd5037410787d5ad4d4a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285618  265161 cache.go:107] acquiring lock: {Name:mk6749e52897d345dd08e6cd0c23af395805aa99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285623  265161 cache.go:107] acquiring lock: {Name:mkbd6b49ab9e482ef9676c3a800f255aea55c704 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285710  265161 cache.go:107] acquiring lock: {Name:mk2510ba3b96b784848e2843cf1d744743c7eaf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285755  265161 cache.go:115] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1212 20:08:31.285746  265161 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:31.285769  265161 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:31.285768  265161 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 145.537µs
	I1212 20:08:31.285795  265161 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1212 20:08:31.285730  265161 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:31.285832  265161 cache.go:107] acquiring lock: {Name:mk0a87bae71250db2df2add52f55e5948ddda9b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285930  265161 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:31.285971  265161 cache.go:115] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1212 20:08:31.285584  265161 cache.go:107] acquiring lock: {Name:mka236661706a3579df9020867bc2d663aaca30d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.285981  265161 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 196.727µs
	I1212 20:08:31.286003  265161 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1212 20:08:31.285595  265161 cache.go:107] acquiring lock: {Name:mk82c937e9f82a7a532182865f786f0506a4e889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.286142  265161 cache.go:115] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 20:08:31.286159  265161 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 583.078µs
	I1212 20:08:31.286171  265161 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 20:08:31.286192  265161 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:31.287080  265161 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:31.287074  265161 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:31.287079  265161 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:31.287076  265161 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:31.287134  265161 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:31.306096  265161 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:08:31.306111  265161 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:08:31.306125  265161 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:08:31.306149  265161 start.go:360] acquireMachinesLock for no-preload-753103: {Name:mk75e497173a23050868488b8602a26938335e69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:08:31.306219  265161 start.go:364] duration metric: took 56.487µs to acquireMachinesLock for "no-preload-753103"
	I1212 20:08:31.306239  265161 start.go:93] Provisioning new machine with config: &{Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:08:31.306336  265161 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:08:30.772434  260486 cli_runner.go:164] Run: docker network inspect old-k8s-version-824670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:08:30.789456  260486 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 20:08:30.793651  260486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:30.803696  260486 kubeadm.go:884] updating cluster {Name:old-k8s-version-824670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-824670 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:08:30.803831  260486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:08:30.803887  260486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:08:30.833507  260486 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:08:30.833525  260486 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:08:30.833569  260486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:08:30.859561  260486 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:08:30.859578  260486 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:08:30.859587  260486 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1212 20:08:30.859660  260486 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-824670 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-824670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:08:30.859718  260486 ssh_runner.go:195] Run: crio config
	I1212 20:08:30.909806  260486 cni.go:84] Creating CNI manager for ""
	I1212 20:08:30.909829  260486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:30.909846  260486 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:08:30.909865  260486 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-824670 NodeName:old-k8s-version-824670 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:08:30.909984  260486 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-824670"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:08:30.910038  260486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1212 20:08:30.927970  260486 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:08:30.928031  260486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:08:30.935701  260486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1212 20:08:30.948106  260486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:08:30.966056  260486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1212 20:08:30.977843  260486 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:08:30.981240  260486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:30.990854  260486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:31.074196  260486 ssh_runner.go:195] Run: sudo systemctl start kubelet
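The grep/cp pair above pins control-plane.minikube.internal to the node IP before the kubelet is started. A minimal standalone sketch of the same rewrite (run on the node; the IP is the one from this run, and the temp-file-then-copy step keeps /etc/hosts from being truncated mid-write):

	# replace any existing control-plane.minikube.internal entry with the node IP
	IP=192.168.94.2
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  printf '%s\tcontrol-plane.minikube.internal\n' "$IP"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts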
	I1212 20:08:31.096647  260486 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670 for IP: 192.168.94.2
	I1212 20:08:31.096667  260486 certs.go:195] generating shared ca certs ...
	I1212 20:08:31.096684  260486 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.096817  260486 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:08:31.096872  260486 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:08:31.096885  260486 certs.go:257] generating profile certs ...
	I1212 20:08:31.096951  260486 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.key
	I1212 20:08:31.096976  260486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt with IP's: []
	I1212 20:08:31.192438  260486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt ...
	I1212 20:08:31.192469  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt: {Name:mk4c392339d0b9d3aa04bd97e3fb072c90819343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.192669  260486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.key ...
	I1212 20:08:31.192691  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.key: {Name:mke0d0cd7cd4d72fa8714feae70851928bd527b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.192815  260486 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa
	I1212 20:08:31.192840  260486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 20:08:31.330627  260486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa ...
	I1212 20:08:31.330656  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa: {Name:mk7e495d4963b24e297aa0a63e83e07a95cc593d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.330797  260486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa ...
	I1212 20:08:31.330820  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa: {Name:mk911bdcbf5d97cdec932d03c3a8dfc9d8038cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.330947  260486 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt.e581b2fa -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt
	I1212 20:08:31.331039  260486 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key.e581b2fa -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key
	I1212 20:08:31.331126  260486 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key
	I1212 20:08:31.331149  260486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt with IP's: []
	I1212 20:08:31.369158  260486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt ...
	I1212 20:08:31.369188  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt: {Name:mk3b92e9fad6e762611d14414653f488eb2e03a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.369379  260486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key ...
	I1212 20:08:31.369400  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key: {Name:mkd01137ff6e745dad2c96283b958e8c28f025b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:31.369629  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:08:31.369690  260486 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:08:31.369707  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:08:31.369756  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:08:31.369807  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:08:31.369843  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:08:31.369907  260486 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:08:31.370813  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:08:31.392387  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:08:31.410586  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:08:31.427597  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:08:31.444678  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 20:08:31.462420  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:08:31.484025  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:08:31.502883  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:08:31.519509  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:08:31.539291  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:08:31.556739  260486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:08:31.575731  260486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:08:31.589501  260486 ssh_runner.go:195] Run: openssl version
	I1212 20:08:31.596204  260486 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.603839  260486 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:08:31.613484  260486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.618313  260486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.618379  260486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:08:31.670580  260486 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:31.678982  260486 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:31.687044  260486 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.696350  260486 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:08:31.710063  260486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.714247  260486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.714316  260486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:31.752872  260486 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:08:31.763140  260486 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:08:31.772375  260486 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.781503  260486 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:08:31.793987  260486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.798570  260486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.798624  260486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:08:31.841677  260486 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:08:31.849760  260486 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
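The openssl/ln pairs above maintain OpenSSL's subject-hash lookup scheme: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. A minimal sketch of one iteration, using the minikubeCA.pem path from this run:

	# compute the subject hash (b5213941 in the log above) and create the lookup symlink
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"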
	I1212 20:08:31.857153  260486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:08:31.860851  260486 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:08:31.860909  260486 kubeadm.go:401] StartCluster: {Name:old-k8s-version-824670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-824670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:08:31.860978  260486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:08:31.861031  260486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:08:31.892029  260486 cri.go:89] found id: ""
	I1212 20:08:31.892096  260486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:08:31.901437  260486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:08:31.909691  260486 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:08:31.909756  260486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:08:31.917796  260486 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:08:31.917815  260486 kubeadm.go:158] found existing configuration files:
	
	I1212 20:08:31.917871  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:08:31.926266  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:08:31.926385  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:08:31.934965  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:08:31.942956  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:08:31.943009  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:08:31.950203  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:08:31.957930  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:08:31.957976  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:08:31.965478  260486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:08:31.973421  260486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:08:31.973464  260486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
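The four grep-then-rm sequences above implement the stale-config cleanup: any existing kubeconfig under /etc/kubernetes that does not point at control-plane.minikube.internal:8443 is removed before `kubeadm init` runs. A compact sketch of the same check (run on the node; a hypothetical loop, not minikube's actual code):

	# drop kubeconfigs that do not reference the expected control-plane endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done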
	I1212 20:08:31.981528  260486 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:08:32.035028  260486 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1212 20:08:32.035194  260486 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:08:32.086191  260486 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:08:32.086318  260486 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:08:32.086402  260486 kubeadm.go:319] OS: Linux
	I1212 20:08:32.086503  260486 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:08:32.086594  260486 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:08:32.086708  260486 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:08:32.086796  260486 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:08:32.086861  260486 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:08:32.086936  260486 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:08:32.086999  260486 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:08:32.087080  260486 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:08:32.169567  260486 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:08:32.169691  260486 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:08:32.169826  260486 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 20:08:32.337887  260486 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:08:29.693371  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:29.693762  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:29.693817  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:29.693879  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:29.727809  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:29.727836  244825 cri.go:89] found id: ""
	I1212 20:08:29.727845  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:29.727900  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.731629  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:29.731683  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:29.765822  244825 cri.go:89] found id: ""
	I1212 20:08:29.765846  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.765856  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:29.765863  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:29.765914  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:29.800777  244825 cri.go:89] found id: ""
	I1212 20:08:29.800799  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.800807  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:29.800814  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:29.800865  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:29.836516  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:29.836537  244825 cri.go:89] found id: ""
	I1212 20:08:29.836547  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:29.836608  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.840199  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:29.840249  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:29.878352  244825 cri.go:89] found id: ""
	I1212 20:08:29.878378  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.878389  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:29.878397  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:29.878445  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:29.917800  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:29.917820  244825 cri.go:89] found id: "2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d"
	I1212 20:08:29.917824  244825 cri.go:89] found id: ""
	I1212 20:08:29.917831  244825 logs.go:282] 2 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56 2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d]
	I1212 20:08:29.917872  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.921527  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:29.924831  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:29.924885  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:29.957241  244825 cri.go:89] found id: ""
	I1212 20:08:29.957262  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.957285  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:29.957294  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:29.957346  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:29.991264  244825 cri.go:89] found id: ""
	I1212 20:08:29.991298  244825 logs.go:282] 0 containers: []
	W1212 20:08:29.991307  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:29.991323  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:29.991340  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:30.011806  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:30.011836  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:30.080991  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:30.081019  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:30.081034  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:30.124508  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:30.124539  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:30.213637  244825 logs.go:123] Gathering logs for kube-controller-manager [2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d] ...
	I1212 20:08:30.213680  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2212d82eda0761e0cee45e73bdefc45434bdfe80e6af42ef1304e448dc31b61d"
	I1212 20:08:30.258769  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:30.258798  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:30.306688  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:30.306713  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:30.396238  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:30.396266  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:30.430845  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:30.430866  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
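The block above is the retry loop's diagnostic pass: probe the apiserver, then pull recent logs for whichever control-plane containers exist. A rough shell equivalent, run on the node (curl stands in for the Go health check; the crictl calls mirror the Run: lines above):

	# apiserver health probe; connection refused is the failure seen in this run
	curl -ksS https://192.168.103.2:8443/healthz || true
	# locate the kube-apiserver container and fetch its recent logs
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo crictl logs --tail 400 "$ID"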
	I1212 20:08:32.982013  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:32.982369  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:32.982419  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:32.982464  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:33.022928  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:33.022949  244825 cri.go:89] found id: ""
	I1212 20:08:33.022959  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:33.023014  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:33.026655  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:33.026717  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:33.062000  244825 cri.go:89] found id: ""
	I1212 20:08:33.062025  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.062034  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:33.062043  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:33.062091  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:32.340117  260486 out.go:252]   - Generating certificates and keys ...
	I1212 20:08:32.340252  260486 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:08:32.340401  260486 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:08:32.456742  260486 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:08:32.638722  260486 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:08:32.772991  260486 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:08:32.881081  260486 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:08:33.278990  260486 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:08:33.279186  260486 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-824670] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 20:08:33.368521  260486 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:08:33.368733  260486 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-824670] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 20:08:33.585827  260486 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:08:33.670440  260486 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:08:33.853231  260486 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:08:33.853347  260486 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:08:33.955038  260486 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:08:29.269957  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:29.269989  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:29.304137  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:29.304171  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:29.317977  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:29.318003  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:29.371837  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:29.371857  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:29.371873  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:29.396860  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:29.396890  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:31.977329  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:31.977686  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:31.977760  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:31.977818  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:32.006001  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:32.006023  245478 cri.go:89] found id: ""
	I1212 20:08:32.006031  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:32.006087  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:32.011107  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:32.011173  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:32.046300  245478 cri.go:89] found id: ""
	I1212 20:08:32.046325  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.046336  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:32.046344  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:32.046404  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:32.086505  245478 cri.go:89] found id: ""
	I1212 20:08:32.086525  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.086536  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:32.086546  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:32.086599  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:32.126101  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:32.126128  245478 cri.go:89] found id: ""
	I1212 20:08:32.126137  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:32.126181  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:32.130564  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:32.130629  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:32.162040  245478 cri.go:89] found id: ""
	I1212 20:08:32.162061  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.162068  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:32.162075  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:32.162131  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:32.191747  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:32.191769  245478 cri.go:89] found id: ""
	I1212 20:08:32.191781  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:32.191846  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:32.196548  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:32.196616  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:32.225129  245478 cri.go:89] found id: ""
	I1212 20:08:32.225156  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.225172  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:32.225178  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:32.225223  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:32.254260  245478 cri.go:89] found id: ""
	I1212 20:08:32.254306  245478 logs.go:282] 0 containers: []
	W1212 20:08:32.254317  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:32.254328  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:32.254344  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:32.291476  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:32.291506  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:32.387382  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:32.387414  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:32.407725  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:32.407753  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:32.475154  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:32.475178  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:32.475198  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:32.511651  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:32.511682  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:32.546814  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:32.546846  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:32.592402  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:32.592437  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:34.278923  260486 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:08:34.442322  260486 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:08:34.597138  260486 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:08:34.597835  260486 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:08:34.601605  260486 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
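The [certs], [kubeconfig], [etcd] and [control-plane] messages above are standard `kubeadm init` phases. When a run like this has to be debugged by hand, each phase can be re-executed individually against the same staged config; a sketch, assuming the /var/tmp/minikube/kubeadm.yaml path used earlier in this run and kubeadm on the node's PATH:

	# re-run individual init phases against the generated config (on the node)
	sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml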
	I1212 20:08:31.307992  265161 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:08:31.308188  265161 start.go:159] libmachine.API.Create for "no-preload-753103" (driver="docker")
	I1212 20:08:31.308217  265161 client.go:173] LocalClient.Create starting
	I1212 20:08:31.308285  265161 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:08:31.308326  265161 main.go:143] libmachine: Decoding PEM data...
	I1212 20:08:31.308346  265161 main.go:143] libmachine: Parsing certificate...
	I1212 20:08:31.308413  265161 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:08:31.308445  265161 main.go:143] libmachine: Decoding PEM data...
	I1212 20:08:31.308464  265161 main.go:143] libmachine: Parsing certificate...
	I1212 20:08:31.308766  265161 cli_runner.go:164] Run: docker network inspect no-preload-753103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:08:31.325247  265161 cli_runner.go:211] docker network inspect no-preload-753103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:08:31.325324  265161 network_create.go:284] running [docker network inspect no-preload-753103] to gather additional debugging logs...
	I1212 20:08:31.325341  265161 cli_runner.go:164] Run: docker network inspect no-preload-753103
	W1212 20:08:31.342776  265161 cli_runner.go:211] docker network inspect no-preload-753103 returned with exit code 1
	I1212 20:08:31.342798  265161 network_create.go:287] error running [docker network inspect no-preload-753103]: docker network inspect no-preload-753103: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-753103 not found
	I1212 20:08:31.342808  265161 network_create.go:289] output of [docker network inspect no-preload-753103]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-753103 not found
	
	** /stderr **
	I1212 20:08:31.342875  265161 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:08:31.360868  265161 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:08:31.361566  265161 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:08:31.362255  265161 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:08:31.362898  265161 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-09b123768b60 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:6c:50:8a:dd:de} reservation:<nil>}
	I1212 20:08:31.363656  265161 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021e7ff0}
	I1212 20:08:31.363681  265161 network_create.go:124] attempt to create docker network no-preload-753103 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 20:08:31.363714  265161 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-753103 no-preload-753103
	I1212 20:08:31.412633  265161 network_create.go:108] docker network no-preload-753103 192.168.85.0/24 created
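The network_create lines above scan the existing 192.168.x.0/24 bridges, pick the first free subnet (192.168.85.0/24 here) and create a labelled bridge network for the profile. A simplified sketch of the create/verify pair using standard docker flags (minikube's extra -o options omitted):

	# create the profile network with the chosen subnet and gateway, then confirm the subnet
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=name.minikube.sigs.k8s.io=no-preload-753103 no-preload-753103
	docker network inspect no-preload-753103 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'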
	I1212 20:08:31.412659  265161 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-753103" container
	I1212 20:08:31.412703  265161 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:08:31.430605  265161 cli_runner.go:164] Run: docker volume create no-preload-753103 --label name.minikube.sigs.k8s.io=no-preload-753103 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:08:31.448180  265161 oci.go:103] Successfully created a docker volume no-preload-753103
	I1212 20:08:31.448243  265161 cli_runner.go:164] Run: docker run --rm --name no-preload-753103-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-753103 --entrypoint /usr/bin/test -v no-preload-753103:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:08:31.467146  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 20:08:31.482460  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:31.498254  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:31.505634  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 20:08:31.584799  265161 cache.go:162] opening:  /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:31.838485  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1212 20:08:31.838506  265161 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 552.91558ms
	I1212 20:08:31.838517  265161 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1212 20:08:31.868719  265161 oci.go:107] Successfully prepared a docker volume no-preload-753103
	I1212 20:08:31.868756  265161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1212 20:08:31.868817  265161 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:08:31.868845  265161 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:08:31.868877  265161 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:08:31.924343  265161 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-753103 --name no-preload-753103 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-753103 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-753103 --network no-preload-753103 --ip 192.168.85.2 --volume no-preload-753103:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:08:32.238512  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Running}}
	I1212 20:08:32.262014  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:08:32.284575  265161 cli_runner.go:164] Run: docker exec no-preload-753103 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:08:32.338359  265161 oci.go:144] the created container "no-preload-753103" has a running status.
	I1212 20:08:32.338390  265161 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa...
	I1212 20:08:32.534469  265161 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:08:32.573620  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:08:32.597764  265161 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:08:32.597787  265161 kic_runner.go:114] Args: [docker exec --privileged no-preload-753103 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:08:32.648326  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:08:32.670658  265161 machine.go:94] provisionDockerMachine start ...
	I1212 20:08:32.670742  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:32.689406  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:32.689735  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:32.689756  265161 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:08:32.830264  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-753103
	
	I1212 20:08:32.830304  265161 ubuntu.go:182] provisioning hostname "no-preload-753103"
	I1212 20:08:32.830367  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:32.858721  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:32.859040  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:32.859060  265161 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-753103 && echo "no-preload-753103" | sudo tee /etc/hostname
	I1212 20:08:32.875331  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1212 20:08:32.875364  265161 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.589749443s
	I1212 20:08:32.875379  265161 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 20:08:32.881317  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1212 20:08:32.881350  265161 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.595684967s
	I1212 20:08:32.881378  265161 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 20:08:32.957822  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1212 20:08:32.957856  265161 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.672286348s
	I1212 20:08:32.957886  265161 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 20:08:33.014922  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-753103
	
	I1212 20:08:33.015021  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.035796  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:33.036002  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:33.036019  265161 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-753103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-753103/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-753103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:08:33.169615  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: 
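	The empty output above means the /etc/hosts fixup script ran cleanly. A quick way to confirm the result by hand, run from the host against the same container (a sketch, not part of the test run):

	$ docker exec no-preload-753103 grep -n no-preload-753103 /etc/hosts
	# expect a single "127.0.1.1 no-preload-753103" entry, added or rewritten by the sed/tee script above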
	I1212 20:08:33.169648  265161 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:08:33.169687  265161 ubuntu.go:190] setting up certificates
	I1212 20:08:33.169699  265161 provision.go:84] configureAuth start
	I1212 20:08:33.169751  265161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-753103
	I1212 20:08:33.189995  265161 provision.go:143] copyHostCerts
	I1212 20:08:33.190053  265161 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:08:33.190075  265161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:08:33.190156  265161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:08:33.190294  265161 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:08:33.190308  265161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:08:33.190352  265161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:08:33.190458  265161 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:08:33.190468  265161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:08:33.190507  265161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:08:33.190594  265161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.no-preload-753103 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-753103]
	I1212 20:08:33.229229  265161 provision.go:177] copyRemoteCerts
	I1212 20:08:33.229283  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:08:33.229329  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.248580  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:33.345211  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:08:33.363952  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:08:33.380629  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:08:33.398286  265161 provision.go:87] duration metric: took 228.554889ms to configureAuth
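	provision.go generates the server certificate in Go, so no equivalent CLI command appears in the log. For readers who want to reproduce a certificate with the same organization and SANs by hand, a rough openssl sketch (file names here are hypothetical; the org and san list are copied from the provision.go line above):

	$ openssl genrsa -out server-key.pem 2048
	$ openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-753103" -out server.csr
	$ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-753103")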
	I1212 20:08:33.398309  265161 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:08:33.398494  265161 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:08:33.398605  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.417177  265161 main.go:143] libmachine: Using SSH client type: native
	I1212 20:08:33.417454  265161 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1212 20:08:33.417475  265161 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:08:33.694149  265161 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:08:33.694178  265161 machine.go:97] duration metric: took 1.023499062s to provisionDockerMachine
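	The log does not show how /etc/sysconfig/crio.minikube is consumed. The usual arrangement (an assumption here, not confirmed by this output) is that the crio systemd unit, or a drop-in for it, sources the file as an EnvironmentFile and appends $CRIO_MINIKUBE_OPTIONS to the daemon command line, roughly:

	# hypothetical drop-in, e.g. /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS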
	I1212 20:08:33.694189  265161 client.go:176] duration metric: took 2.385962679s to LocalClient.Create
	I1212 20:08:33.694210  265161 start.go:167] duration metric: took 2.386023665s to libmachine.API.Create "no-preload-753103"
	I1212 20:08:33.694220  265161 start.go:293] postStartSetup for "no-preload-753103" (driver="docker")
	I1212 20:08:33.694231  265161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:08:33.694304  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:08:33.694355  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.715072  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:33.810312  265161 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:08:33.813499  265161 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:08:33.813526  265161 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:08:33.813538  265161 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:08:33.813593  265161 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:08:33.813695  265161 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:08:33.813808  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:08:33.820814  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:08:33.839116  265161 start.go:296] duration metric: took 144.885326ms for postStartSetup
	I1212 20:08:33.839503  265161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-753103
	I1212 20:08:33.857255  265161 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/config.json ...
	I1212 20:08:33.857490  265161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:08:33.857527  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.874112  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:33.964709  265161 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:08:33.968848  265161 start.go:128] duration metric: took 2.662499242s to createHost
	I1212 20:08:33.968871  265161 start.go:83] releasing machines lock for "no-preload-753103", held for 2.662641224s
	I1212 20:08:33.968929  265161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-753103
	I1212 20:08:33.986327  265161 ssh_runner.go:195] Run: cat /version.json
	I1212 20:08:33.986336  265161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:08:33.986380  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:33.986424  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:08:34.005017  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:34.005321  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:08:34.535796  265161 cache.go:157] /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1212 20:08:34.535826  265161 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 3.250202446s
	I1212 20:08:34.535841  265161 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1212 20:08:34.535860  265161 cache.go:87] Successfully saved all images to host disk.
	I1212 20:08:34.535934  265161 ssh_runner.go:195] Run: systemctl --version
	I1212 20:08:34.542541  265161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:08:34.574147  265161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:08:34.578493  265161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:08:34.578555  265161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:08:34.603039  265161 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
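	The disable step renames matching CNI configs instead of deleting them, so they can be inspected afterwards; a quick check (sketch, run from the host):

	$ docker exec no-preload-753103 ls /etc/cni/net.d/
	# the bridge/podman configs listed above now carry a .mk_disabled suffix, e.g. 87-podman-bridge.conflist.mk_disabled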
	I1212 20:08:34.603058  265161 start.go:496] detecting cgroup driver to use...
	I1212 20:08:34.603088  265161 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:08:34.603139  265161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:08:34.618612  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:08:34.629638  265161 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:08:34.629688  265161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:08:34.649322  265161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:08:34.670309  265161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:08:34.757021  265161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:08:34.839106  265161 docker.go:234] disabling docker service ...
	I1212 20:08:34.839216  265161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:08:34.856734  265161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:08:34.867912  265161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:08:34.953792  265161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:08:35.033898  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:08:35.045359  265161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:08:35.058662  265161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:08:35.058716  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.068502  265161 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:08:35.068551  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.076672  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.084637  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.092663  265161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:08:35.099940  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.107809  265161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.120180  265161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:08:35.128194  265161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:08:35.135103  265161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:08:35.141721  265161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:35.232884  265161 ssh_runner.go:195] Run: sudo systemctl restart crio
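	For anyone replaying this CRI-O setup by hand, the 20:08:35 sed/tee steps above condense to roughly the following script (endpoint, pause image, and config path are copied from the log; the conmon_cgroup and default_sysctls edits are summarized rather than repeated verbatim):

	$ printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	$ sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	$ sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	$ sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	$ sudo systemctl daemon-reload && sudo systemctl restart crio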
	I1212 20:08:35.547357  265161 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:08:35.547425  265161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:08:35.551225  265161 start.go:564] Will wait 60s for crictl version
	I1212 20:08:35.551269  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.554699  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:08:35.580586  265161 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
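	The same runtime identification can be queried directly on the node at any time; a one-liner sketch using the crictl path reported by `which crictl` above:

	$ docker exec no-preload-753103 sudo /usr/local/bin/crictl version
	# reports RuntimeName cri-o, RuntimeVersion 1.34.3, RuntimeApiVersion v1, matching the block above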
	I1212 20:08:35.580666  265161 ssh_runner.go:195] Run: crio --version
	I1212 20:08:35.610027  265161 ssh_runner.go:195] Run: crio --version
	I1212 20:08:35.650172  265161 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:08:35.651208  265161 cli_runner.go:164] Run: docker network inspect no-preload-753103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:08:35.671802  265161 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 20:08:35.675834  265161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:35.687522  265161 kubeadm.go:884] updating cluster {Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:08:35.687645  265161 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:08:35.687687  265161 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:08:35.720446  265161 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 20:08:35.720473  265161 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 20:08:35.720540  265161 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:35.720542  265161 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.720585  265161 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.720611  265161 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1212 20:08:35.720637  265161 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.720670  265161 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.720633  265161 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.720585  265161 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.721893  265161 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.722358  265161 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.721924  265161 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.722255  265161 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.722459  265161 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.722895  265161 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.722909  265161 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1212 20:08:35.723093  265161 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:35.893614  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.901450  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.904167  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.909060  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.934614  265161 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1212 20:08:35.934658  265161 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.934696  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.940118  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.943652  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.945466  265161 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1212 20:08:35.945515  265161 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.945560  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.949005  265161 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1212 20:08:35.949039  265161 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.949083  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.953393  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.953487  265161 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1212 20:08:35.953517  265161 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.953545  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.985670  265161 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1212 20:08:35.985713  265161 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:35.985759  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.988631  265161 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1212 20:08:35.988663  265161 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:35.988701  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.988727  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:35.988784  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:35.988849  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:35.988856  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:35.993812  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:36.029565  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:36.029626  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:36.029847  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:36.030614  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:36.030657  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:08:36.032882  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:36.077038  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:08:36.077106  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:36.077115  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 20:08:36.077139  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:08:36.077106  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:08:36.077162  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:08:36.077204  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1212 20:08:36.118371  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 20:08:36.118469  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 20:08:36.118902  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:36.118971  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:08:36.118991  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:33.099183  244825 cri.go:89] found id: ""
	I1212 20:08:33.099216  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.099225  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:33.099230  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:33.099297  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:33.136038  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:33.136060  244825 cri.go:89] found id: ""
	I1212 20:08:33.136068  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:33.136114  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:33.140063  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:33.140115  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:33.181121  244825 cri.go:89] found id: ""
	I1212 20:08:33.181146  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.181153  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:33.181159  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:33.181211  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:33.216582  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:33.216603  244825 cri.go:89] found id: ""
	I1212 20:08:33.216613  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:33.216655  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:33.220373  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:33.220433  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:33.257479  244825 cri.go:89] found id: ""
	I1212 20:08:33.257497  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.257504  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:33.257512  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:33.257552  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:33.292293  244825 cri.go:89] found id: ""
	I1212 20:08:33.292319  244825 logs.go:282] 0 containers: []
	W1212 20:08:33.292329  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:33.292340  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:33.292360  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:33.360026  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:33.360053  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:33.399678  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:33.399705  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:33.448360  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:33.448385  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:33.490145  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:33.490173  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:33.594524  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:33.594554  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:33.611592  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:33.611625  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:33.671922  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:33.671945  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:33.671960  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:36.215338  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:36.216243  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
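	The failing probe can be reproduced outside the test harness; a minimal check against the same endpoint (-k skips TLS verification because the apiserver presents the cluster CA's certificate):

	$ curl -k https://192.168.103.2:8443/healthz
	# while the kube-apiserver container is down this returns "connection refused", matching the log line above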
	I1212 20:08:36.216319  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:36.216377  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:36.259376  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:36.259405  244825 cri.go:89] found id: ""
	I1212 20:08:36.259416  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:36.259475  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.264302  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:36.264367  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:36.310504  244825 cri.go:89] found id: ""
	I1212 20:08:36.310527  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.310540  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:36.310548  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:36.310598  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:36.359952  244825 cri.go:89] found id: ""
	I1212 20:08:36.359980  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.359991  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:36.359999  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:36.360056  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:36.413009  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:36.413041  244825 cri.go:89] found id: ""
	I1212 20:08:36.413051  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:36.413109  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.418548  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:36.418615  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:36.469051  244825 cri.go:89] found id: ""
	I1212 20:08:36.469093  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.469103  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:36.469111  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:36.469174  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:36.526264  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:36.526295  244825 cri.go:89] found id: ""
	I1212 20:08:36.526305  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:36.526359  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.531961  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:36.532028  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:36.583971  244825 cri.go:89] found id: ""
	I1212 20:08:36.584000  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.584010  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:36.584018  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:36.584089  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:36.632000  244825 cri.go:89] found id: ""
	I1212 20:08:36.632027  244825 logs.go:282] 0 containers: []
	W1212 20:08:36.632037  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:36.632048  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:36.632070  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:36.676522  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:36.676548  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:36.821545  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:36.821581  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:36.840830  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:36.840859  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:36.917924  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:36.917946  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:36.917961  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:36.958297  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:36.958323  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:37.046321  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:37.046353  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:37.085078  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:37.085106  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:34.603237  260486 out.go:252]   - Booting up control plane ...
	I1212 20:08:34.603383  260486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:08:34.603501  260486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:08:34.604085  260486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:08:34.617362  260486 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:08:34.618216  260486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:08:34.618310  260486 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:08:34.730506  260486 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 20:08:35.173427  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:35.173863  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:35.173912  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:35.173951  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:35.207292  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:35.207315  245478 cri.go:89] found id: ""
	I1212 20:08:35.207325  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:35.207385  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.211116  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:35.211169  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:35.236404  245478 cri.go:89] found id: ""
	I1212 20:08:35.236428  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.236438  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:35.236445  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:35.236492  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:35.262102  245478 cri.go:89] found id: ""
	I1212 20:08:35.262127  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.262137  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:35.262143  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:35.262185  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:35.286330  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:35.286346  245478 cri.go:89] found id: ""
	I1212 20:08:35.286354  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:35.286399  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.290212  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:35.290258  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:35.317607  245478 cri.go:89] found id: ""
	I1212 20:08:35.317631  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.317642  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:35.317656  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:35.317702  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:35.343703  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:35.343726  245478 cri.go:89] found id: ""
	I1212 20:08:35.343736  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:35.343780  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:35.347432  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:35.347483  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:35.371903  245478 cri.go:89] found id: ""
	I1212 20:08:35.371933  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.371940  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:35.371948  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:35.371986  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:35.396117  245478 cri.go:89] found id: ""
	I1212 20:08:35.396138  245478 logs.go:282] 0 containers: []
	W1212 20:08:35.396146  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:35.396155  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:35.396165  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:35.478108  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:35.478137  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:35.492265  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:35.492315  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:35.546987  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:35.547004  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:35.547024  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:35.582658  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:35.582684  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:35.610932  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:35.610969  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:35.644808  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:35.644845  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:35.708461  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:35.708494  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:38.251991  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:38.252425  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:38.252477  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:38.252531  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:38.278786  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:38.278805  245478 cri.go:89] found id: ""
	I1212 20:08:38.278815  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:38.278866  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:38.282629  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:38.282687  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:38.308302  245478 cri.go:89] found id: ""
	I1212 20:08:38.308323  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.308331  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:38.308336  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:38.308380  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:38.333938  245478 cri.go:89] found id: ""
	I1212 20:08:38.333960  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.333970  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:38.333978  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:38.334032  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:38.358789  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:38.358809  245478 cri.go:89] found id: ""
	I1212 20:08:38.358820  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:38.358876  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:38.362770  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:38.362830  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:38.388459  245478 cri.go:89] found id: ""
	I1212 20:08:38.388484  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.388493  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:38.388498  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:38.388539  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:38.414949  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:38.414970  245478 cri.go:89] found id: ""
	I1212 20:08:38.414978  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:38.415017  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:38.418784  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:38.418840  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:38.445333  245478 cri.go:89] found id: ""
	I1212 20:08:38.445355  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.445363  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:38.445371  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:38.445427  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:38.469911  245478 cri.go:89] found id: ""
	I1212 20:08:38.469934  245478 logs.go:282] 0 containers: []
	W1212 20:08:38.469950  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:38.469961  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:38.469972  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:38.523899  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:38.523918  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:38.523931  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:38.553594  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:38.553621  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:38.579102  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:38.579124  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:38.606622  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:38.606656  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:38.668913  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:38.668946  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:38.700862  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:38.700887  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:38.800352  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:38.800382  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:39.733070  260486 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002751 seconds
	I1212 20:08:39.733209  260486 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:08:39.747288  260486 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:08:40.267995  260486 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:08:40.268329  260486 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-824670 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:08:40.778654  260486 kubeadm.go:319] [bootstrap-token] Using token: 0rx6pa.vzh88q7v9ne7n54f
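	The bootstrap token minted here (0rx6pa.vzh88q7v9ne7n54f) is what the join commands printed later authenticate with. As a sketch, on the control-plane node such tokens can be listed or regenerated with stock kubeadm commands:
	  $ sudo kubeadm token list
	  $ sudo kubeadm token create --print-join-command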
	I1212 20:08:36.123521  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.123557  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1212 20:08:36.123675  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:36.123676  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.123750  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:36.123717  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1212 20:08:36.123811  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1212 20:08:36.124050  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.124964  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.124988  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1212 20:08:36.146201  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.170995  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.171003  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1212 20:08:36.171016  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1212 20:08:36.171026  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1212 20:08:36.171037  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1212 20:08:36.171119  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:08:36.294544  265161 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 20:08:36.294589  265161 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.294636  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:36.294698  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1212 20:08:36.294713  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1212 20:08:36.368417  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.440783  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.513083  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:36.542727  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.542788  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 20:08:36.588857  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 20:08:36.588964  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:08:36.995783  265161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1212 20:08:38.125793  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.582980729s)
	I1212 20:08:38.125824  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1212 20:08:38.125835  265161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.536853615s)
	I1212 20:08:38.125845  265161 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1212 20:08:38.125861  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1212 20:08:38.125880  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1212 20:08:38.125890  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1212 20:08:38.125885  265161 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1: (1.130062386s)
	I1212 20:08:38.125951  265161 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1212 20:08:38.125981  265161 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1212 20:08:38.126011  265161 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.419373  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.293450663s)
	I1212 20:08:39.419406  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1212 20:08:39.419423  265161 ssh_runner.go:235] Completed: which crictl: (1.293391671s)
	I1212 20:08:39.419431  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:39.419486  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:08:39.419530  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 20:08:40.626387  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.206825785s)
	I1212 20:08:40.626419  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1212 20:08:40.626439  265161 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.206940412s)
	I1212 20:08:40.626442  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 20:08:40.626487  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:08:40.626489  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
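	Because this profile runs the crio runtime without a preload tarball, each cached image is scp'd to /var/lib/minikube/images and then loaded through podman so CRI-O can see it. A minimal manual equivalent, using the paths from the log above, would be:
	  $ stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0    # existence/size check
	  $ sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	  $ sudo podman images | grep kube-proxy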
	I1212 20:08:40.780171  260486 out.go:252]   - Configuring RBAC rules ...
	I1212 20:08:40.780324  260486 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:08:40.785141  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:08:40.790671  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:08:40.793340  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:08:40.795902  260486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:08:40.798592  260486 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:08:40.807781  260486 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:08:41.000066  260486 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:08:41.189309  260486 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:08:41.190027  260486 kubeadm.go:319] 
	I1212 20:08:41.190137  260486 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:08:41.190156  260486 kubeadm.go:319] 
	I1212 20:08:41.190256  260486 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:08:41.190268  260486 kubeadm.go:319] 
	I1212 20:08:41.190322  260486 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:08:41.190415  260486 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:08:41.190492  260486 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:08:41.190505  260486 kubeadm.go:319] 
	I1212 20:08:41.190591  260486 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:08:41.190600  260486 kubeadm.go:319] 
	I1212 20:08:41.190678  260486 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:08:41.190694  260486 kubeadm.go:319] 
	I1212 20:08:41.190764  260486 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:08:41.190879  260486 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:08:41.190990  260486 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:08:41.191006  260486 kubeadm.go:319] 
	I1212 20:08:41.191123  260486 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:08:41.191229  260486 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:08:41.191239  260486 kubeadm.go:319] 
	I1212 20:08:41.191381  260486 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0rx6pa.vzh88q7v9ne7n54f \
	I1212 20:08:41.191525  260486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:08:41.191584  260486 kubeadm.go:319] 	--control-plane 
	I1212 20:08:41.191596  260486 kubeadm.go:319] 
	I1212 20:08:41.191705  260486 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:08:41.191722  260486 kubeadm.go:319] 
	I1212 20:08:41.191855  260486 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0rx6pa.vzh88q7v9ne7n54f \
	I1212 20:08:41.192005  260486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:08:41.194559  260486 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:08:41.194738  260486 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:08:41.194765  260486 cni.go:84] Creating CNI manager for ""
	I1212 20:08:41.194777  260486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:41.197043  260486 out.go:179] * Configuring CNI (Container Networking Interface) ...
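	The kubeadm output above spells out the post-init steps itself; on the node they amount to copying the admin kubeconfig and, per the Service-Kubelet warning, enabling the kubelet unit:
	  $ mkdir -p $HOME/.kube
	  $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	  $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
	  $ sudo systemctl enable kubelet.service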
	I1212 20:08:39.639209  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:39.639609  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:39.639659  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:39.639704  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:39.673665  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:39.673684  244825 cri.go:89] found id: ""
	I1212 20:08:39.673692  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:39.673740  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.677310  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:39.677373  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:39.710524  244825 cri.go:89] found id: ""
	I1212 20:08:39.710549  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.710560  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:39.710568  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:39.710619  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:39.744786  244825 cri.go:89] found id: ""
	I1212 20:08:39.744811  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.744822  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:39.744830  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:39.744884  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:39.781015  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:39.781037  244825 cri.go:89] found id: ""
	I1212 20:08:39.781046  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:39.781109  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.784783  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:39.784845  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:39.819095  244825 cri.go:89] found id: ""
	I1212 20:08:39.819117  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.819131  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:39.819139  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:39.819190  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:39.860027  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:39.860048  244825 cri.go:89] found id: ""
	I1212 20:08:39.860058  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:39.860119  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:39.864652  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:39.864719  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:39.904929  244825 cri.go:89] found id: ""
	I1212 20:08:39.904955  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.904966  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:39.904974  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:39.905029  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:39.941704  244825 cri.go:89] found id: ""
	I1212 20:08:39.941728  244825 logs.go:282] 0 containers: []
	W1212 20:08:39.941742  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:39.941752  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:39.941767  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:39.959527  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:39.959563  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:40.018952  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:40.018969  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:40.018983  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:40.058375  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:40.058407  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:40.128094  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:40.128121  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:40.163906  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:40.163930  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:40.225409  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:40.225443  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:40.271236  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:40.271264  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
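	The "Checking apiserver healthz" / "connection refused" pairs are a poll of the API server's /healthz endpoint, repeated between log-gathering passes until it answers or the wait times out. The probe is equivalent to something like:
	  $ curl -k https://192.168.103.2:8443/healthz
	which keeps failing with connection refused as long as no kube-apiserver is listening on 8443.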
	I1212 20:08:42.870874  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:42.871257  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:42.871335  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:42.871382  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:42.906933  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:42.906955  244825 cri.go:89] found id: ""
	I1212 20:08:42.906964  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:42.907022  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:42.910857  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:42.910919  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:42.944269  244825 cri.go:89] found id: ""
	I1212 20:08:42.944314  244825 logs.go:282] 0 containers: []
	W1212 20:08:42.944324  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:42.944331  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:42.944391  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:42.979080  244825 cri.go:89] found id: ""
	I1212 20:08:42.979107  244825 logs.go:282] 0 containers: []
	W1212 20:08:42.979116  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:42.979123  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:42.979173  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:43.014534  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:43.014555  244825 cri.go:89] found id: ""
	I1212 20:08:43.014564  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:43.014607  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:43.018363  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:43.018423  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:43.054456  244825 cri.go:89] found id: ""
	I1212 20:08:43.054483  244825 logs.go:282] 0 containers: []
	W1212 20:08:43.054494  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:43.054502  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:43.054564  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:41.198353  260486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:08:41.203231  260486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1212 20:08:41.203249  260486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:08:41.217767  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:08:42.013338  260486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:08:42.013435  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:42.013452  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-824670 minikube.k8s.io/updated_at=2025_12_12T20_08_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=old-k8s-version-824670 minikube.k8s.io/primary=true
	I1212 20:08:42.022701  260486 ops.go:34] apiserver oom_adj: -16
	I1212 20:08:42.089669  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:42.590462  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:43.090516  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:43.590299  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
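	After the CNI manifest and the minikube-rbac clusterrolebinding are applied, readiness is judged by whether the default service account can be fetched; the repeated "get sa default" runs above are effectively this loop (a sketch, using the same kubeconfig and binary path):
	  $ until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 1; done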
	I1212 20:08:41.318904  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:41.319393  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:41.319446  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:41.319492  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:41.371691  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:41.371712  245478 cri.go:89] found id: ""
	I1212 20:08:41.371722  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:41.371782  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:41.376689  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:41.376754  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:41.407123  245478 cri.go:89] found id: ""
	I1212 20:08:41.407154  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.407165  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:41.407173  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:41.407237  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:41.438656  245478 cri.go:89] found id: ""
	I1212 20:08:41.438682  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.438693  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:41.438701  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:41.438753  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:41.471830  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:41.471851  245478 cri.go:89] found id: ""
	I1212 20:08:41.471861  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:41.471917  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:41.477076  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:41.477136  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:41.510968  245478 cri.go:89] found id: ""
	I1212 20:08:41.510995  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.511006  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:41.511014  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:41.511073  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:41.545107  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:41.545132  245478 cri.go:89] found id: ""
	I1212 20:08:41.545235  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:41.545321  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:41.550045  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:41.550115  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:41.580785  245478 cri.go:89] found id: ""
	I1212 20:08:41.580815  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.580827  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:41.580834  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:41.580892  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:41.614524  245478 cri.go:89] found id: ""
	I1212 20:08:41.614551  245478 logs.go:282] 0 containers: []
	W1212 20:08:41.614562  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:41.614573  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:41.614589  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:41.633086  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:41.633115  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:41.707740  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:41.707761  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:41.707782  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:41.745479  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:41.745512  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:41.779160  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:41.779189  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:41.808978  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:41.809012  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:41.876306  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:41.876344  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:41.911771  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:41.911802  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:41.829031  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.202450399s)
	I1212 20:08:41.829087  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1212 20:08:41.829085  265161 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.202568652s)
	I1212 20:08:41.829133  265161 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:08:41.829151  265161 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:08:41.829192  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:08:43.394225  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.565010892s)
	I1212 20:08:43.394260  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1212 20:08:43.394303  265161 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:43.394304  265161 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.565114428s)
	I1212 20:08:43.394347  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 20:08:43.394349  265161 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1212 20:08:43.394474  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1212 20:08:44.756482  265161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.362109676s)
	I1212 20:08:44.756515  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1212 20:08:44.756527  265161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: (1.362036501s)
	I1212 20:08:44.756535  265161 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:08:44.756551  265161 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1212 20:08:44.756576  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:08:44.756573  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1212 20:08:45.296360  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 20:08:45.296396  265161 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1212 20:08:45.296436  265161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1212 20:08:45.404602  265161 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-5703/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1212 20:08:45.404643  265161 cache_images.go:125] Successfully loaded all cached images
	I1212 20:08:45.404651  265161 cache_images.go:94] duration metric: took 9.684163438s to LoadCachedImages
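	With all cached images transferred and loaded, the runtime's image store should now contain them; assuming a shell on the node, that can be confirmed from either view of the same store:
	  $ sudo podman images
	  $ sudo crictl images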
	I1212 20:08:45.404665  265161 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 20:08:45.404775  265161 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-753103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
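	The drop-in above pins the kubelet ExecStart flags (cgroup handling, node IP, kubeconfig paths) for this profile. Once it is installed under /etc/systemd/system/kubelet.service.d (see the scp further down), the effective unit can be checked with standard systemd tooling:
	  $ systemctl cat kubelet
	  $ systemctl show -p ExecStart kubelet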
	I1212 20:08:45.404867  265161 ssh_runner.go:195] Run: crio config
	I1212 20:08:45.446680  265161 cni.go:84] Creating CNI manager for ""
	I1212 20:08:45.446697  265161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:45.446710  265161 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:08:45.446728  265161 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-753103 NodeName:no-preload-753103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:08:45.446833  265161 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-753103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
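	This is the full kubeadm configuration minikube renders for the profile (written to /var/tmp/minikube/kubeadm.yaml.new further down). As a sketch, such a file can be exercised without changing node state via kubeadm's dry-run mode:
	  $ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run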
	I1212 20:08:45.446894  265161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:08:45.454665  265161 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1212 20:08:45.454709  265161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:08:45.462825  265161 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1212 20:08:45.462889  265161 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1212 20:08:45.462910  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 20:08:45.462970  265161 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1212 20:08:45.466674  265161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 20:08:45.466700  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
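	With no preloaded binaries for v1.35.0-beta.0 on the node, kubectl, kubelet and kubeadm are fetched from dl.k8s.io together with their published checksums and then copied over. The download-plus-verify step is roughly the standard upstream install procedure:
	  $ curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl
	  $ curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	  $ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check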
	I1212 20:08:43.092398  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:43.092418  244825 cri.go:89] found id: ""
	I1212 20:08:43.092429  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:43.092488  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:43.096600  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:43.096666  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:43.139162  244825 cri.go:89] found id: ""
	I1212 20:08:43.139186  244825 logs.go:282] 0 containers: []
	W1212 20:08:43.139197  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:43.139206  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:43.139264  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:43.179247  244825 cri.go:89] found id: ""
	I1212 20:08:43.179291  244825 logs.go:282] 0 containers: []
	W1212 20:08:43.179302  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:43.179313  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:43.179328  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:43.222396  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:43.222425  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:43.280020  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:43.280059  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:43.319896  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:43.319930  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:43.426931  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:43.426963  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:43.442691  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:43.442715  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:43.502328  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:43.502348  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:43.502366  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:43.540866  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:43.540900  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:46.111532  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:46.112064  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:46.112123  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:46.112165  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:46.155917  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:46.155941  244825 cri.go:89] found id: ""
	I1212 20:08:46.155951  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:46.156002  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:46.160472  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:46.160544  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:46.197687  244825 cri.go:89] found id: ""
	I1212 20:08:46.197716  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.197727  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:46.197735  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:46.197786  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:46.238516  244825 cri.go:89] found id: ""
	I1212 20:08:46.238542  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.238552  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:46.238560  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:46.238609  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:46.273243  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:46.273288  244825 cri.go:89] found id: ""
	I1212 20:08:46.273301  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:46.273347  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:46.277248  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:46.277342  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:46.309983  244825 cri.go:89] found id: ""
	I1212 20:08:46.310005  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.310015  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:46.310023  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:46.310070  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:46.346164  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:46.346184  244825 cri.go:89] found id: ""
	I1212 20:08:46.346194  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:46.346247  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:46.349786  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:46.349844  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:46.387217  244825 cri.go:89] found id: ""
	I1212 20:08:46.387246  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.387308  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:46.387319  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:46.387380  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:46.433829  244825 cri.go:89] found id: ""
	I1212 20:08:46.433856  244825 logs.go:282] 0 containers: []
	W1212 20:08:46.433867  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:46.433878  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:46.433900  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:46.550589  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:46.550625  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:46.572398  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:46.572430  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:46.657022  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:46.657046  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:46.657062  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:46.699346  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:46.699374  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:46.766841  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:46.766867  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:46.800364  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:46.800389  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:46.846664  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:46.846693  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:46.322703  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:08:46.336772  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 20:08:46.341021  265161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 20:08:46.341051  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1212 20:08:46.454807  265161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1212 20:08:46.462493  265161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 20:08:46.462535  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1212 20:08:46.658843  265161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:08:46.667840  265161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:08:46.682074  265161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:08:46.876362  265161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1212 20:08:46.890982  265161 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:08:46.894934  265161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:08:46.952701  265161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:47.027327  265161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:08:47.054531  265161 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103 for IP: 192.168.85.2
	I1212 20:08:47.054547  265161 certs.go:195] generating shared ca certs ...
	I1212 20:08:47.054561  265161 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.054731  265161 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:08:47.054806  265161 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:08:47.054822  265161 certs.go:257] generating profile certs ...
	I1212 20:08:47.054902  265161 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.key
	I1212 20:08:47.054918  265161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt with IP's: []
	I1212 20:08:47.083072  265161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt ...
	I1212 20:08:47.083099  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt: {Name:mk8ea38dfc959f9ecc1890a3049161ef20ba2f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.083268  265161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.key ...
	I1212 20:08:47.083298  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.key: {Name:mk8ad0f4ecdf0768646879e602d2f79e3b039e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.083412  265161 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421
	I1212 20:08:47.083431  265161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 20:08:47.155746  265161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421 ...
	I1212 20:08:47.155780  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421: {Name:mk743ea3a5dfd6f0d3aa9df8c263651cb1815cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.155962  265161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421 ...
	I1212 20:08:47.155982  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421: {Name:mk29a2308cae746f0665e9ca087baeb7914e10fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.156088  265161 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt.0be4f421 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt
	I1212 20:08:47.156181  265161 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key.0be4f421 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key
	I1212 20:08:47.156261  265161 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key
	I1212 20:08:47.156300  265161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt with IP's: []
	I1212 20:08:47.365313  265161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt ...
	I1212 20:08:47.365339  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt: {Name:mkb7989fbda8dd318405e0f57c3a111f48b20a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.365533  265161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key ...
	I1212 20:08:47.365552  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key: {Name:mk2291c16bfb0af1a719821b1fc28e69e1427237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:47.365794  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:08:47.365851  265161 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:08:47.365874  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:08:47.365915  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:08:47.365954  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:08:47.365987  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:08:47.366043  265161 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:08:47.366755  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:08:47.385519  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:08:47.402552  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:08:47.418983  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:08:47.435569  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:08:47.452024  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:08:47.468004  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:08:47.484684  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:08:47.501341  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:08:47.520406  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:08:47.536939  265161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:08:47.553107  265161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:08:47.564637  265161 ssh_runner.go:195] Run: openssl version
	I1212 20:08:47.570372  265161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.577183  265161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:08:47.584307  265161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.587941  265161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.587990  265161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:08:47.633169  265161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:08:47.641181  265161 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
	I1212 20:08:47.650704  265161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.658818  265161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:08:47.666719  265161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.670595  265161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.670650  265161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:08:47.712583  265161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:47.719823  265161 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:08:47.727948  265161 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.735756  265161 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:08:47.743568  265161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.747384  265161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.747431  265161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:08:47.793199  265161 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:08:47.801052  265161 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:08:47.808336  265161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:08:47.812407  265161 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:08:47.812464  265161 kubeadm.go:401] StartCluster: {Name:no-preload-753103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-753103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:08:47.812537  265161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:08:47.812585  265161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:08:47.840282  265161 cri.go:89] found id: ""
	I1212 20:08:47.840347  265161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:08:47.849211  265161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:08:47.857022  265161 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:08:47.857093  265161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:08:47.864606  265161 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:08:47.864625  265161 kubeadm.go:158] found existing configuration files:
	
	I1212 20:08:47.864677  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:08:47.872767  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:08:47.872822  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:08:47.880081  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:08:47.887357  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:08:47.887403  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:08:47.894342  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:08:47.901820  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:08:47.901870  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:08:47.909076  265161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:08:47.916480  265161 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:08:47.916514  265161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:08:47.923681  265161 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:08:47.957036  265161 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:08:47.957119  265161 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:08:48.021197  265161 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:08:48.021309  265161 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:08:48.021359  265161 kubeadm.go:319] OS: Linux
	I1212 20:08:48.021444  265161 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:08:48.021532  265161 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:08:48.021625  265161 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:08:48.021719  265161 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:08:48.021800  265161 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:08:48.021870  265161 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:08:48.021941  265161 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:08:48.022002  265161 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:08:48.084129  265161 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:08:48.084319  265161 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:08:48.084457  265161 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:08:48.100204  265161 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:08:44.090376  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:44.590476  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:45.090121  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:45.590003  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:46.089754  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:46.590598  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:47.090522  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:47.590422  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:48.090489  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:48.590162  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:48.101951  265161 out.go:252]   - Generating certificates and keys ...
	I1212 20:08:48.102051  265161 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:08:48.102147  265161 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:08:48.144031  265161 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:08:48.221185  265161 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:08:48.349128  265161 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:08:48.418221  265161 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:08:48.451995  265161 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:08:48.452184  265161 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-753103] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:08:48.497385  265161 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:08:48.497527  265161 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-753103] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:08:48.570531  265161 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:08:48.688867  265161 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:08:48.774170  265161 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:08:48.774230  265161 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:08:48.787187  265161 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:08:48.802079  265161 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:08:48.863835  265161 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:08:48.947843  265161 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:08:49.110750  265161 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:08:49.111507  265161 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:08:49.117250  265161 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:08:44.498353  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:44.498830  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:44.498891  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:44.498952  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:44.527059  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:44.527080  245478 cri.go:89] found id: ""
	I1212 20:08:44.527090  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:44.527140  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:44.530986  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:44.531051  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:44.559144  245478 cri.go:89] found id: ""
	I1212 20:08:44.559171  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.559182  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:44.559189  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:44.559247  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:44.590049  245478 cri.go:89] found id: ""
	I1212 20:08:44.590084  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.590095  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:44.590104  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:44.590160  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:44.622244  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:44.622263  245478 cri.go:89] found id: ""
	I1212 20:08:44.622282  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:44.622339  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:44.627557  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:44.627626  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:44.660195  245478 cri.go:89] found id: ""
	I1212 20:08:44.660224  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.660235  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:44.660242  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:44.660319  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:44.693117  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:44.693139  245478 cri.go:89] found id: ""
	I1212 20:08:44.693150  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:44.693210  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:44.697292  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:44.697351  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:44.723617  245478 cri.go:89] found id: ""
	I1212 20:08:44.723643  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.723655  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:44.723663  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:44.723706  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:44.749049  245478 cri.go:89] found id: ""
	I1212 20:08:44.749072  245478 logs.go:282] 0 containers: []
	W1212 20:08:44.749082  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:44.749093  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:44.749108  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:44.777101  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:44.777131  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:44.806670  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:44.806702  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:44.858042  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:44.858078  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:44.887913  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:44.887936  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:44.972043  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:44.972078  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:44.987485  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:44.987514  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:45.050138  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:45.050163  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:45.050177  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:47.584342  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:47.584685  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:47.584743  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:47.584789  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:47.616424  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:47.616450  245478 cri.go:89] found id: ""
	I1212 20:08:47.616461  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:47.616520  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:47.620238  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:47.620324  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:47.649702  245478 cri.go:89] found id: ""
	I1212 20:08:47.649726  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.649735  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:47.649742  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:47.649794  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:47.676235  245478 cri.go:89] found id: ""
	I1212 20:08:47.676260  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.676298  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:47.676312  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:47.676359  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:47.701927  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:47.701948  245478 cri.go:89] found id: ""
	I1212 20:08:47.701956  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:47.701998  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:47.705734  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:47.705799  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:47.731929  245478 cri.go:89] found id: ""
	I1212 20:08:47.731951  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.731960  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:47.731967  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:47.732014  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:47.758125  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:47.758145  245478 cri.go:89] found id: ""
	I1212 20:08:47.758154  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:47.758223  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:47.761929  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:47.761984  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:47.790519  245478 cri.go:89] found id: ""
	I1212 20:08:47.790543  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.790553  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:47.790560  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:47.790612  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:47.819270  245478 cri.go:89] found id: ""
	I1212 20:08:47.819317  245478 logs.go:282] 0 containers: []
	W1212 20:08:47.819325  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:47.819334  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:47.819347  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:47.850165  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:47.850195  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:47.935473  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:47.935498  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:47.949744  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:47.949770  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:48.009754  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:48.009776  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:48.009798  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:48.044549  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:48.044583  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:48.073338  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:48.073371  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:48.102944  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:48.102973  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:49.120379  265161 out.go:252]   - Booting up control plane ...
	I1212 20:08:49.120511  265161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:08:49.120611  265161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:08:49.120695  265161 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:08:49.132948  265161 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:08:49.133115  265161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:08:49.140292  265161 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:08:49.140508  265161 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:08:49.140573  265161 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:08:49.239455  265161 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:08:49.239583  265161 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:08:49.741164  265161 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.848408ms
	I1212 20:08:49.745735  265161 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:08:49.745892  265161 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:08:49.746036  265161 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:08:49.746149  265161 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:08:50.751726  265161 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005922388s
	I1212 20:08:49.385501  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:49.385911  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:49.385966  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:49.386025  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:49.421491  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:49.421514  244825 cri.go:89] found id: ""
	I1212 20:08:49.421523  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:49.421577  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:49.425133  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:49.425192  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:49.458069  244825 cri.go:89] found id: ""
	I1212 20:08:49.458092  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.458099  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:49.458104  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:49.458146  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:49.492481  244825 cri.go:89] found id: ""
	I1212 20:08:49.492507  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.492517  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:49.492525  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:49.492577  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:49.534555  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:49.534578  244825 cri.go:89] found id: ""
	I1212 20:08:49.534588  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:49.534638  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:49.538211  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:49.538268  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:49.570252  244825 cri.go:89] found id: ""
	I1212 20:08:49.570296  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.570307  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:49.570315  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:49.570354  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:49.603802  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:49.603831  244825 cri.go:89] found id: ""
	I1212 20:08:49.603842  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:49.603897  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:49.607634  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:49.607692  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:49.642107  244825 cri.go:89] found id: ""
	I1212 20:08:49.642134  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.642145  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:49.642153  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:49.642206  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:49.677551  244825 cri.go:89] found id: ""
	I1212 20:08:49.677571  244825 logs.go:282] 0 containers: []
	W1212 20:08:49.677578  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:49.677587  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:49.677603  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:49.713703  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:49.713726  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:49.801630  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:49.801652  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:49.816335  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:49.816356  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:49.872785  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:49.872807  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:49.872818  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:49.908696  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:49.908722  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:49.982729  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:49.982760  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:50.026341  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:50.026385  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:52.592829  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:52.593240  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:52.593311  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:52.593363  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:52.629141  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:52.629160  244825 cri.go:89] found id: ""
	I1212 20:08:52.629168  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:52.629213  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:52.633496  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:52.633559  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:52.672394  244825 cri.go:89] found id: ""
	I1212 20:08:52.672420  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.672432  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:52.672440  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:52.672489  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:52.706611  244825 cri.go:89] found id: ""
	I1212 20:08:52.706631  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.706638  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:52.706645  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:52.706697  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:52.741760  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:52.741779  244825 cri.go:89] found id: ""
	I1212 20:08:52.741797  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:52.741843  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:52.745525  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:52.745582  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:52.779744  244825 cri.go:89] found id: ""
	I1212 20:08:52.779764  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.779772  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:52.779778  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:52.779830  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:52.814544  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:52.814567  244825 cri.go:89] found id: ""
	I1212 20:08:52.814577  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:52.814635  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:52.818422  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:52.818483  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:52.852517  244825 cri.go:89] found id: ""
	I1212 20:08:52.852542  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.852552  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:52.852560  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:52.852627  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:52.889641  244825 cri.go:89] found id: ""
	I1212 20:08:52.889667  244825 logs.go:282] 0 containers: []
	W1212 20:08:52.889679  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:52.889690  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:52.889705  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:52.928168  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:52.928195  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:53.025654  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:53.025682  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:53.040993  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:53.041016  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 20:08:49.090670  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:49.590513  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:50.090507  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:50.590494  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:51.090480  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:51.589774  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:52.090437  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:52.590330  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:53.090294  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:53.589908  260486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:53.663970  260486 kubeadm.go:1114] duration metric: took 11.650606291s to wait for elevateKubeSystemPrivileges
	I1212 20:08:53.664010  260486 kubeadm.go:403] duration metric: took 21.803106362s to StartCluster
	I1212 20:08:53.664043  260486 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:53.664120  260486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:08:53.665100  260486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:08:53.665353  260486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:08:53.665352  260486 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:08:53.665370  260486 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:08:53.665448  260486 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-824670"
	I1212 20:08:53.665562  260486 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-824670"
	I1212 20:08:53.665581  260486 config.go:182] Loaded profile config "old-k8s-version-824670": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 20:08:53.665459  260486 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-824670"
	I1212 20:08:53.665610  260486 host.go:66] Checking if "old-k8s-version-824670" exists ...
	I1212 20:08:53.665623  260486 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-824670"
	I1212 20:08:53.666080  260486 cli_runner.go:164] Run: docker container inspect old-k8s-version-824670 --format={{.State.Status}}
	I1212 20:08:53.666300  260486 cli_runner.go:164] Run: docker container inspect old-k8s-version-824670 --format={{.State.Status}}
	I1212 20:08:53.671336  260486 out.go:179] * Verifying Kubernetes components...
	I1212 20:08:53.672651  260486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:08:53.688751  260486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:08:51.359902  265161 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.614053888s
	I1212 20:08:53.747155  265161 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001354922s
	I1212 20:08:53.767113  265161 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:08:53.779336  265161 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:08:53.791775  265161 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:08:53.792108  265161 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-753103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:08:53.804380  265161 kubeadm.go:319] [bootstrap-token] Using token: ll5dd3.f0k4t0l7ykcnbls2
	I1212 20:08:53.690070  260486 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:08:53.690093  260486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:08:53.690157  260486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-824670
	I1212 20:08:53.690890  260486 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-824670"
	I1212 20:08:53.690933  260486 host.go:66] Checking if "old-k8s-version-824670" exists ...
	I1212 20:08:53.691454  260486 cli_runner.go:164] Run: docker container inspect old-k8s-version-824670 --format={{.State.Status}}
	I1212 20:08:53.713972  260486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/old-k8s-version-824670/id_rsa Username:docker}
	I1212 20:08:53.715263  260486 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:08:53.715342  260486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:08:53.715498  260486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-824670
	I1212 20:08:53.742805  260486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/old-k8s-version-824670/id_rsa Username:docker}
	I1212 20:08:53.773409  260486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:08:53.833678  260486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:08:53.863237  260486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:08:53.867547  260486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:08:50.682339  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:50.682741  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:50.682802  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:50.682867  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:50.713608  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:50.713629  245478 cri.go:89] found id: ""
	I1212 20:08:50.713639  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:50.713696  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:50.717785  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:50.717852  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:50.748187  245478 cri.go:89] found id: ""
	I1212 20:08:50.748210  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.748220  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:50.748226  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:50.748286  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:50.781753  245478 cri.go:89] found id: ""
	I1212 20:08:50.781866  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.781886  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:50.781896  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:50.781955  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:50.809701  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:50.809718  245478 cri.go:89] found id: ""
	I1212 20:08:50.809725  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:50.809771  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:50.814048  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:50.814105  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:50.843358  245478 cri.go:89] found id: ""
	I1212 20:08:50.843382  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.843392  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:50.843399  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:50.843460  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:50.875400  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:50.875424  245478 cri.go:89] found id: ""
	I1212 20:08:50.875435  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:50.875489  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:50.879408  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:50.879465  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:50.911038  245478 cri.go:89] found id: ""
	I1212 20:08:50.911066  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.911096  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:50.911109  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:50.911167  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:50.939706  245478 cri.go:89] found id: ""
	I1212 20:08:50.939730  245478 logs.go:282] 0 containers: []
	W1212 20:08:50.939742  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:50.939755  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:50.939769  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:51.010431  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:51.010461  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:51.041775  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:51.041804  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:51.150469  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:51.150505  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:51.167403  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:51.167431  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:51.246376  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:51.246399  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:51.246430  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:51.284712  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:51.284738  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:51.319700  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:51.319740  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:53.863176  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:53.863599  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:53.863649  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:53.863695  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:53.910698  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:53.910726  245478 cri.go:89] found id: ""
	I1212 20:08:53.910736  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:53.910796  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:53.917626  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:53.917697  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:53.954369  245478 cri.go:89] found id: ""
	I1212 20:08:53.954406  245478 logs.go:282] 0 containers: []
	W1212 20:08:53.954417  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:53.954432  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:53.954492  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:53.986679  245478 cri.go:89] found id: ""
	I1212 20:08:53.986707  245478 logs.go:282] 0 containers: []
	W1212 20:08:53.986718  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:53.986745  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:53.986838  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:54.020608  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:54.020637  245478 cri.go:89] found id: ""
	I1212 20:08:54.020649  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:54.020714  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:54.025625  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:54.025689  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:54.056667  245478 cri.go:89] found id: ""
	I1212 20:08:54.056693  245478 logs.go:282] 0 containers: []
	W1212 20:08:54.056703  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:54.056711  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:54.056776  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:54.093334  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:54.093356  245478 cri.go:89] found id: ""
	I1212 20:08:54.093367  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:54.093424  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:54.098710  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:54.098771  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:54.132106  245478 cri.go:89] found id: ""
	I1212 20:08:54.132132  245478 logs.go:282] 0 containers: []
	W1212 20:08:54.132143  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:54.132151  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:54.132207  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:54.166292  245478 cri.go:89] found id: ""
	I1212 20:08:54.166319  245478 logs.go:282] 0 containers: []
	W1212 20:08:54.166330  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:54.166341  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:54.166356  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:54.201423  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:54.201456  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:54.234622  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:54.077955  260486 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 20:08:54.079217  260486 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-824670" to be "Ready" ...
	I1212 20:08:54.316431  260486 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:08:53.805419  265161 out.go:252]   - Configuring RBAC rules ...
	I1212 20:08:53.805631  265161 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:08:53.811830  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:08:53.819035  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:08:53.822103  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:08:53.826097  265161 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:08:53.829811  265161 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:08:54.153969  265161 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:08:54.569794  265161 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:08:55.155654  265161 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:08:55.156494  265161 kubeadm.go:319] 
	I1212 20:08:55.156614  265161 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:08:55.156633  265161 kubeadm.go:319] 
	I1212 20:08:55.156758  265161 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:08:55.156767  265161 kubeadm.go:319] 
	I1212 20:08:55.156801  265161 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:08:55.156896  265161 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:08:55.156973  265161 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:08:55.156983  265161 kubeadm.go:319] 
	I1212 20:08:55.157106  265161 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:08:55.157126  265161 kubeadm.go:319] 
	I1212 20:08:55.157188  265161 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:08:55.157194  265161 kubeadm.go:319] 
	I1212 20:08:55.157261  265161 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:08:55.157379  265161 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:08:55.157462  265161 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:08:55.157468  265161 kubeadm.go:319] 
	I1212 20:08:55.157573  265161 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:08:55.157683  265161 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:08:55.157689  265161 kubeadm.go:319] 
	I1212 20:08:55.157787  265161 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ll5dd3.f0k4t0l7ykcnbls2 \
	I1212 20:08:55.157935  265161 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:08:55.157966  265161 kubeadm.go:319] 	--control-plane 
	I1212 20:08:55.157972  265161 kubeadm.go:319] 
	I1212 20:08:55.158103  265161 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:08:55.158108  265161 kubeadm.go:319] 
	I1212 20:08:55.158175  265161 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ll5dd3.f0k4t0l7ykcnbls2 \
	I1212 20:08:55.158267  265161 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:08:55.161712  265161 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:08:55.161921  265161 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:08:55.161959  265161 cni.go:84] Creating CNI manager for ""
	I1212 20:08:55.161972  265161 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:08:55.163814  265161 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:08:55.164923  265161 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:08:55.170778  265161 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 20:08:55.170805  265161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:08:55.189074  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:08:55.456759  265161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:08:55.456832  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:55.456934  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-753103 minikube.k8s.io/updated_at=2025_12_12T20_08_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=no-preload-753103 minikube.k8s.io/primary=true
	I1212 20:08:55.550689  265161 ops.go:34] apiserver oom_adj: -16
	I1212 20:08:55.550908  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:56.051655  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1212 20:08:53.100047  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:53.100070  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:53.100086  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:53.139859  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:53.139890  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:53.214334  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:53.214363  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:53.249914  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:53.249942  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:55.795831  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:55.796300  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:55.796349  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:55.796410  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:55.837503  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:55.837526  244825 cri.go:89] found id: ""
	I1212 20:08:55.837538  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:55.837605  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:55.842734  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:55.842794  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:55.888479  244825 cri.go:89] found id: ""
	I1212 20:08:55.888503  244825 logs.go:282] 0 containers: []
	W1212 20:08:55.888515  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:55.888524  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:55.888583  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:55.935841  244825 cri.go:89] found id: ""
	I1212 20:08:55.935869  244825 logs.go:282] 0 containers: []
	W1212 20:08:55.935879  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:55.935886  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:55.935940  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:55.976004  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:55.976267  244825 cri.go:89] found id: ""
	I1212 20:08:55.976298  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:55.976357  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:55.981297  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:55.981367  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:56.019615  244825 cri.go:89] found id: ""
	I1212 20:08:56.019635  244825 logs.go:282] 0 containers: []
	W1212 20:08:56.019644  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:56.019650  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:56.019702  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:56.058661  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:56.058679  244825 cri.go:89] found id: ""
	I1212 20:08:56.058687  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:56.058729  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:56.062946  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:56.063004  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:56.103948  244825 cri.go:89] found id: ""
	I1212 20:08:56.103974  244825 logs.go:282] 0 containers: []
	W1212 20:08:56.103985  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:56.103992  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:56.104049  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:56.142834  244825 cri.go:89] found id: ""
	I1212 20:08:56.142859  244825 logs.go:282] 0 containers: []
	W1212 20:08:56.142870  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:56.142880  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:56.142898  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:56.180554  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:56.180587  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:56.251075  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:56.251103  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:56.286036  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:56.286064  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:56.333119  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:56.333147  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:56.369513  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:56.369536  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:56.462099  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:56.462131  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:56.477910  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:56.477936  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:56.536389  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:54.317775  260486 addons.go:530] duration metric: took 652.399388ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:08:54.583080  260486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-824670" context rescaled to 1 replicas
	W1212 20:08:56.083264  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	W1212 20:08:58.583322  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	I1212 20:08:54.236223  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:54.307681  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:54.307732  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:54.346919  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:54.346959  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:54.464774  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:54.464805  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:54.478843  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:54.478866  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:54.536619  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:54.536641  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:54.536656  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:57.073060  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:08:57.073477  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:08:57.073526  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:57.073568  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:57.101256  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:57.101301  245478 cri.go:89] found id: ""
	I1212 20:08:57.101313  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:08:57.101358  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:57.106018  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:57.106078  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:57.136416  245478 cri.go:89] found id: ""
	I1212 20:08:57.136441  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.136452  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:08:57.136461  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:57.136525  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:57.161989  245478 cri.go:89] found id: ""
	I1212 20:08:57.162013  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.162021  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:08:57.162029  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:57.162085  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:57.187899  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:57.187921  245478 cri.go:89] found id: ""
	I1212 20:08:57.187930  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:08:57.187980  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:57.191712  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:57.191779  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:57.216675  245478 cri.go:89] found id: ""
	I1212 20:08:57.216697  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.216707  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:57.216713  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:57.216766  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:57.241743  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:57.241766  245478 cri.go:89] found id: ""
	I1212 20:08:57.241774  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:08:57.241832  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:08:57.245476  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:57.245532  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:57.268665  245478 cri.go:89] found id: ""
	I1212 20:08:57.268682  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.268689  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:57.268694  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:57.268729  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:57.291927  245478 cri.go:89] found id: ""
	I1212 20:08:57.291951  245478 logs.go:282] 0 containers: []
	W1212 20:08:57.291961  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:57.291973  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:08:57.291986  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:57.320628  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:57.320654  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:08:57.402141  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:57.402169  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:57.415875  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:57.415896  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:57.469540  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:57.469564  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:08:57.469582  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:08:57.499342  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:08:57.499367  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:08:57.523890  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:08:57.523918  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:08:57.548713  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:57.548738  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:56.551295  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:57.051766  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:57.550979  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:58.051576  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:58.551315  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:59.051482  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:08:59.551500  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:09:00.051915  265161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:09:00.118198  265161 kubeadm.go:1114] duration metric: took 4.661436508s to wait for elevateKubeSystemPrivileges
	I1212 20:09:00.118233  265161 kubeadm.go:403] duration metric: took 12.305771851s to StartCluster
	I1212 20:09:00.118253  265161 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:00.118351  265161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:09:00.119631  265161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:00.119863  265161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:09:00.119873  265161 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:09:00.119939  265161 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:09:00.120068  265161 addons.go:70] Setting storage-provisioner=true in profile "no-preload-753103"
	I1212 20:09:00.120089  265161 addons.go:239] Setting addon storage-provisioner=true in "no-preload-753103"
	I1212 20:09:00.120089  265161 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:09:00.120096  265161 addons.go:70] Setting default-storageclass=true in profile "no-preload-753103"
	I1212 20:09:00.120124  265161 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-753103"
	I1212 20:09:00.120138  265161 host.go:66] Checking if "no-preload-753103" exists ...
	I1212 20:09:00.120553  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:09:00.120733  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:09:00.121393  265161 out.go:179] * Verifying Kubernetes components...
	I1212 20:09:00.122791  265161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:09:00.147916  265161 addons.go:239] Setting addon default-storageclass=true in "no-preload-753103"
	I1212 20:09:00.147976  265161 host.go:66] Checking if "no-preload-753103" exists ...
	I1212 20:09:00.148527  265161 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:09:00.148620  265161 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:09:00.149864  265161 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:09:00.149901  265161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:09:00.149954  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:09:00.189446  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:09:00.195707  265161 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:09:00.195733  265161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:09:00.195789  265161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:09:00.220877  265161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:09:00.224655  265161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:09:00.289097  265161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:09:00.312693  265161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:09:00.331369  265161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:09:00.415592  265161 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1212 20:09:00.417660  265161 node_ready.go:35] waiting up to 6m0s for node "no-preload-753103" to be "Ready" ...
	I1212 20:09:00.647875  265161 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:09:00.648868  265161 addons.go:530] duration metric: took 528.937079ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:09:00.920160  265161 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-753103" context rescaled to 1 replicas
	I1212 20:08:59.037358  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:08:59.037781  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:08:59.037847  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:08:59.037903  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:08:59.079792  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:59.079862  244825 cri.go:89] found id: ""
	I1212 20:08:59.079885  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:08:59.079952  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:59.084535  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:08:59.084600  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:08:59.120623  244825 cri.go:89] found id: ""
	I1212 20:08:59.120647  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.120657  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:08:59.120664  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:08:59.120725  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:08:59.166947  244825 cri.go:89] found id: ""
	I1212 20:08:59.166972  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.166980  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:08:59.166987  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:08:59.167051  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:08:59.201085  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:59.201107  244825 cri.go:89] found id: ""
	I1212 20:08:59.201114  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:08:59.201160  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:59.204775  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:08:59.204825  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:08:59.237433  244825 cri.go:89] found id: ""
	I1212 20:08:59.237456  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.237467  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:08:59.237475  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:08:59.237523  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:08:59.272242  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:59.272260  244825 cri.go:89] found id: ""
	I1212 20:08:59.272267  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:08:59.272333  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:08:59.275883  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:08:59.275951  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:08:59.311291  244825 cri.go:89] found id: ""
	I1212 20:08:59.311316  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.311326  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:08:59.311338  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:08:59.311407  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:08:59.343420  244825 cri.go:89] found id: ""
	I1212 20:08:59.343447  244825 logs.go:282] 0 containers: []
	W1212 20:08:59.343457  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:08:59.343469  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:08:59.343482  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:08:59.360360  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:08:59.360387  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:08:59.417068  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:08:59.417090  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:08:59.417110  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:08:59.454912  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:08:59.454942  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:08:59.520691  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:08:59.520717  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:08:59.554112  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:08:59.554134  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:08:59.603891  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:08:59.603926  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:08:59.642900  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:08:59.642929  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.230066  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:02.230528  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:09:02.230588  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:02.230652  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:02.267435  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:02.267456  244825 cri.go:89] found id: ""
	I1212 20:09:02.267465  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:09:02.267538  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:02.271944  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:02.272021  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:02.305788  244825 cri.go:89] found id: ""
	I1212 20:09:02.305810  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.305818  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:09:02.305824  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:02.305868  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:02.341059  244825 cri.go:89] found id: ""
	I1212 20:09:02.341083  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.341094  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:09:02.341102  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:02.341152  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:02.378325  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:02.378348  244825 cri.go:89] found id: ""
	I1212 20:09:02.378356  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:09:02.378418  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:02.382187  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:02.382244  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:02.415045  244825 cri.go:89] found id: ""
	I1212 20:09:02.415069  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.415081  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:02.415088  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:02.415144  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:02.449295  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:02.449319  244825 cri.go:89] found id: ""
	I1212 20:09:02.449329  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:09:02.449378  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:02.453707  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:02.453768  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:02.490605  244825 cri.go:89] found id: ""
	I1212 20:09:02.490631  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.490642  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:02.490649  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:02.490703  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:02.526082  244825 cri.go:89] found id: ""
	I1212 20:09:02.526109  244825 logs.go:282] 0 containers: []
	W1212 20:09:02.526121  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:02.526133  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:02.526143  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:02.615472  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:02.615509  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:02.631426  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:02.631451  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:02.689589  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:02.689614  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:09:02.689631  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:02.726590  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:09:02.726619  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:02.793016  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:09:02.793045  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:02.830324  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:02.830353  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:02.876390  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:09:02.876420  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
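	The cycle above — a refused /healthz probe followed by enumerating control-plane containers and gathering their logs — repeats until the apiserver answers. The probe itself is just a poll against the cluster's HTTPS /healthz endpoint. A minimal Go sketch of such a probe (the URL, timeout, and skipped TLS verification are assumptions for illustration, not minikube's own implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes; "connection refused" simply means retry later.
func probeHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test clusters use self-signed certs, so verification is
		// skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, deadline)
}

func main() {
	// Endpoint taken from the log lines above.
	if err := probeHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}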
	W1212 20:09:01.082905  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	W1212 20:09:03.084104  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	I1212 20:09:00.107828  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:00.108260  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:00.108343  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:00.108396  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:00.144384  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:00.144409  245478 cri.go:89] found id: ""
	I1212 20:09:00.144419  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:00.144473  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:00.150583  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:00.150667  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:00.208006  245478 cri.go:89] found id: ""
	I1212 20:09:00.208032  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.208042  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:00.208050  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:00.208102  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:00.245921  245478 cri.go:89] found id: ""
	I1212 20:09:00.245943  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.245953  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:00.245961  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:00.246014  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:00.282487  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:00.282510  245478 cri.go:89] found id: ""
	I1212 20:09:00.282520  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:00.282579  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:00.287707  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:00.287772  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:00.320553  245478 cri.go:89] found id: ""
	I1212 20:09:00.320574  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.320582  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:00.320590  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:00.320632  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:00.357974  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:00.358000  245478 cri.go:89] found id: ""
	I1212 20:09:00.358011  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:00.358068  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:00.363628  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:00.363739  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:00.399218  245478 cri.go:89] found id: ""
	I1212 20:09:00.399241  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.399249  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:00.399254  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:00.399315  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:00.435949  245478 cri.go:89] found id: ""
	I1212 20:09:00.435973  245478 logs.go:282] 0 containers: []
	W1212 20:09:00.435984  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:00.435993  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:00.436008  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:00.509785  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:00.509813  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:00.549799  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:00.549830  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:00.653717  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:00.653743  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:00.668375  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:00.668401  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:00.722555  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:00.722577  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:00.722592  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:00.752797  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:00.752823  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:00.779395  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:00.779426  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
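	Each gather pass relies on `crictl ps -a --quiet --name=<component>` to find a component's container IDs (the "found id:" lines) before tailing their logs. A small Go sketch of that lookup, shelling out the same way the harness does — the sudo/crictl command line matches the log, while the helper around it is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line. crictl must be on PATH and the
// caller needs access to the CRI socket.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", component, len(ids), ids)
	}
}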
	I1212 20:09:03.310519  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:03.310877  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:03.310937  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:03.310987  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:03.337348  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:03.337369  245478 cri.go:89] found id: ""
	I1212 20:09:03.337377  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:03.337436  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:03.341354  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:03.341413  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:03.366226  245478 cri.go:89] found id: ""
	I1212 20:09:03.366252  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.366262  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:03.366284  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:03.366347  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:03.390931  245478 cri.go:89] found id: ""
	I1212 20:09:03.390952  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.390962  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:03.390970  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:03.391020  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:03.414799  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:03.414820  245478 cri.go:89] found id: ""
	I1212 20:09:03.414830  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:03.414874  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:03.418421  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:03.418480  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:03.443496  245478 cri.go:89] found id: ""
	I1212 20:09:03.443516  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.443524  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:03.443537  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:03.443589  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:03.469224  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:03.469246  245478 cri.go:89] found id: ""
	I1212 20:09:03.469256  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:03.469340  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:03.472971  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:03.473017  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:03.496711  245478 cri.go:89] found id: ""
	I1212 20:09:03.496731  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.496739  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:03.496745  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:03.496802  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:03.520325  245478 cri.go:89] found id: ""
	I1212 20:09:03.520349  245478 logs.go:282] 0 containers: []
	W1212 20:09:03.520358  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:03.520366  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:03.520380  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:03.533205  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:03.533225  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:03.586183  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:03.586201  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:03.586212  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:03.614522  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:03.614544  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:03.639883  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:03.639910  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:03.664803  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:03.664825  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:03.718505  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:03.718531  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:03.746292  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:03.746315  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 20:09:02.420833  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	W1212 20:09:04.421150  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	I1212 20:09:05.414545  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:05.414935  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:09:05.414982  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:05.415027  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:05.449032  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:05.449048  244825 cri.go:89] found id: ""
	I1212 20:09:05.449056  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:09:05.449104  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:05.452787  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:05.452844  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:05.486982  244825 cri.go:89] found id: ""
	I1212 20:09:05.487005  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.487015  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:09:05.487023  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:05.487074  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:05.519711  244825 cri.go:89] found id: ""
	I1212 20:09:05.519734  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.519743  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:09:05.519750  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:05.519802  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:05.553576  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:05.553594  244825 cri.go:89] found id: ""
	I1212 20:09:05.553603  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:09:05.553655  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:05.557137  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:05.557192  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:05.589898  244825 cri.go:89] found id: ""
	I1212 20:09:05.589926  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.589933  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:05.589974  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:05.590020  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:05.622238  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:05.622255  244825 cri.go:89] found id: ""
	I1212 20:09:05.622263  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:09:05.622323  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:05.625635  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:05.625692  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:05.657993  244825 cri.go:89] found id: ""
	I1212 20:09:05.658016  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.658026  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:05.658034  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:05.658077  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:05.689973  244825 cri.go:89] found id: ""
	I1212 20:09:05.689993  244825 logs.go:282] 0 containers: []
	W1212 20:09:05.689999  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:05.690007  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:05.690017  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:05.735898  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:09:05.735922  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:05.771304  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:05.771329  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:05.860371  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:05.860400  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:05.876224  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:05.876252  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:05.935378  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:05.935400  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:09:05.935415  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:05.973013  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:09:05.973040  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:06.040032  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:09:06.040058  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	W1212 20:09:05.582191  260486 node_ready.go:57] node "old-k8s-version-824670" has "Ready":"False" status (will retry)
	I1212 20:09:07.082443  260486 node_ready.go:49] node "old-k8s-version-824670" is "Ready"
	I1212 20:09:07.082468  260486 node_ready.go:38] duration metric: took 13.003226267s for node "old-k8s-version-824670" to be "Ready" ...
	I1212 20:09:07.082481  260486 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:09:07.082524  260486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:07.094351  260486 api_server.go:72] duration metric: took 13.428906809s to wait for apiserver process to appear ...
	I1212 20:09:07.094373  260486 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:09:07.094387  260486 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:09:07.099476  260486 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 20:09:07.100618  260486 api_server.go:141] control plane version: v1.28.0
	I1212 20:09:07.100640  260486 api_server.go:131] duration metric: took 6.262135ms to wait for apiserver health ...
	I1212 20:09:07.100647  260486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:09:07.104672  260486 system_pods.go:59] 8 kube-system pods found
	I1212 20:09:07.104724  260486 system_pods.go:61] "coredns-5dd5756b68-shgbw" [2a42f31d-a757-492d-bd0f-539953154a92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:07.104738  260486 system_pods.go:61] "etcd-old-k8s-version-824670" [e3c6e799-4dac-4c0c-8063-2574684473bd] Running
	I1212 20:09:07.104752  260486 system_pods.go:61] "kindnet-75qr9" [16750e71-744f-4d14-9c72-513a0ef89bd9] Running
	I1212 20:09:07.104765  260486 system_pods.go:61] "kube-apiserver-old-k8s-version-824670" [d744d324-f28f-4417-bd24-10f31d44d033] Running
	I1212 20:09:07.104771  260486 system_pods.go:61] "kube-controller-manager-old-k8s-version-824670" [a546cec2-5f43-4c0a-b310-07fa485e55c4] Running
	I1212 20:09:07.104775  260486 system_pods.go:61] "kube-proxy-nwrgl" [500e6acc-e453-4e40-81df-5d6db1f0f764] Running
	I1212 20:09:07.104787  260486 system_pods.go:61] "kube-scheduler-old-k8s-version-824670" [87d76929-c951-4faf-8216-7c61d544cadb] Running
	I1212 20:09:07.104798  260486 system_pods.go:61] "storage-provisioner" [c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:07.104805  260486 system_pods.go:74] duration metric: took 4.151469ms to wait for pod list to return data ...
	I1212 20:09:07.104813  260486 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:09:07.107059  260486 default_sa.go:45] found service account: "default"
	I1212 20:09:07.107075  260486 default_sa.go:55] duration metric: took 2.25761ms for default service account to be created ...
	I1212 20:09:07.107083  260486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:09:07.109832  260486 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:07.109862  260486 system_pods.go:89] "coredns-5dd5756b68-shgbw" [2a42f31d-a757-492d-bd0f-539953154a92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:07.109868  260486 system_pods.go:89] "etcd-old-k8s-version-824670" [e3c6e799-4dac-4c0c-8063-2574684473bd] Running
	I1212 20:09:07.109874  260486 system_pods.go:89] "kindnet-75qr9" [16750e71-744f-4d14-9c72-513a0ef89bd9] Running
	I1212 20:09:07.109880  260486 system_pods.go:89] "kube-apiserver-old-k8s-version-824670" [d744d324-f28f-4417-bd24-10f31d44d033] Running
	I1212 20:09:07.109888  260486 system_pods.go:89] "kube-controller-manager-old-k8s-version-824670" [a546cec2-5f43-4c0a-b310-07fa485e55c4] Running
	I1212 20:09:07.109895  260486 system_pods.go:89] "kube-proxy-nwrgl" [500e6acc-e453-4e40-81df-5d6db1f0f764] Running
	I1212 20:09:07.109900  260486 system_pods.go:89] "kube-scheduler-old-k8s-version-824670" [87d76929-c951-4faf-8216-7c61d544cadb] Running
	I1212 20:09:07.109909  260486 system_pods.go:89] "storage-provisioner" [c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:07.109931  260486 retry.go:31] will retry after 218.897242ms: missing components: kube-dns
	I1212 20:09:07.332507  260486 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:07.332531  260486 system_pods.go:89] "coredns-5dd5756b68-shgbw" [2a42f31d-a757-492d-bd0f-539953154a92] Running
	I1212 20:09:07.332536  260486 system_pods.go:89] "etcd-old-k8s-version-824670" [e3c6e799-4dac-4c0c-8063-2574684473bd] Running
	I1212 20:09:07.332540  260486 system_pods.go:89] "kindnet-75qr9" [16750e71-744f-4d14-9c72-513a0ef89bd9] Running
	I1212 20:09:07.332544  260486 system_pods.go:89] "kube-apiserver-old-k8s-version-824670" [d744d324-f28f-4417-bd24-10f31d44d033] Running
	I1212 20:09:07.332548  260486 system_pods.go:89] "kube-controller-manager-old-k8s-version-824670" [a546cec2-5f43-4c0a-b310-07fa485e55c4] Running
	I1212 20:09:07.332551  260486 system_pods.go:89] "kube-proxy-nwrgl" [500e6acc-e453-4e40-81df-5d6db1f0f764] Running
	I1212 20:09:07.332556  260486 system_pods.go:89] "kube-scheduler-old-k8s-version-824670" [87d76929-c951-4faf-8216-7c61d544cadb] Running
	I1212 20:09:07.332561  260486 system_pods.go:89] "storage-provisioner" [c9aec911-e8c8-4ff9-8e8c-2d5e27b5812e] Running
	I1212 20:09:07.332570  260486 system_pods.go:126] duration metric: took 225.480662ms to wait for k8s-apps to be running ...
	I1212 20:09:07.332586  260486 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:09:07.332636  260486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:09:07.345473  260486 system_svc.go:56] duration metric: took 12.877831ms WaitForService to wait for kubelet
	I1212 20:09:07.345507  260486 kubeadm.go:587] duration metric: took 13.680064163s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:09:07.345532  260486 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:09:07.347822  260486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:09:07.347840  260486 node_conditions.go:123] node cpu capacity is 8
	I1212 20:09:07.347856  260486 node_conditions.go:105] duration metric: took 2.317514ms to run NodePressure ...
	I1212 20:09:07.347871  260486 start.go:242] waiting for startup goroutines ...
	I1212 20:09:07.347884  260486 start.go:247] waiting for cluster config update ...
	I1212 20:09:07.347897  260486 start.go:256] writing updated cluster config ...
	I1212 20:09:07.348148  260486 ssh_runner.go:195] Run: rm -f paused
	I1212 20:09:07.351774  260486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:07.355201  260486 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-shgbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.359003  260486 pod_ready.go:94] pod "coredns-5dd5756b68-shgbw" is "Ready"
	I1212 20:09:07.359018  260486 pod_ready.go:86] duration metric: took 3.794603ms for pod "coredns-5dd5756b68-shgbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.361266  260486 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.364828  260486 pod_ready.go:94] pod "etcd-old-k8s-version-824670" is "Ready"
	I1212 20:09:07.364845  260486 pod_ready.go:86] duration metric: took 3.545245ms for pod "etcd-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.367102  260486 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.370556  260486 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-824670" is "Ready"
	I1212 20:09:07.370572  260486 pod_ready.go:86] duration metric: took 3.454086ms for pod "kube-apiserver-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.372722  260486 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.756094  260486 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-824670" is "Ready"
	I1212 20:09:07.756119  260486 pod_ready.go:86] duration metric: took 383.382536ms for pod "kube-controller-manager-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:07.956908  260486 pod_ready.go:83] waiting for pod "kube-proxy-nwrgl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.355318  260486 pod_ready.go:94] pod "kube-proxy-nwrgl" is "Ready"
	I1212 20:09:08.355340  260486 pod_ready.go:86] duration metric: took 398.410194ms for pod "kube-proxy-nwrgl" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.556646  260486 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.956352  260486 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-824670" is "Ready"
	I1212 20:09:08.956380  260486 pod_ready.go:86] duration metric: took 399.711158ms for pod "kube-scheduler-old-k8s-version-824670" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:08.956397  260486 pod_ready.go:40] duration metric: took 1.604599301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:09.007023  260486 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1212 20:09:09.008435  260486 out.go:203] 
	W1212 20:09:09.009546  260486 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1212 20:09:09.010578  260486 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1212 20:09:09.012097  260486 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-824670" cluster and "default" namespace by default
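	The old-k8s-version startup above only completes once the node reports Ready and each labelled kube-system pod clears its Ready condition (the node_ready.go and pod_ready.go lines). The same wait can be reproduced with a plain kubectl poll; a hedged Go sketch, with the kubeconfig path and poll interval assumed rather than taken from minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition, mirroring the
// "has Ready:False (will retry)" / "is Ready" lines in the log.
func waitNodeReady(name, kubeconfig string, timeout time.Duration) error {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "node", name, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // not Ready yet (will retry)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	const kubeconfig = "/var/lib/minikube/kubeconfig" // path taken from the log above
	if err := waitNodeReady("old-k8s-version-824670", kubeconfig, 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	// After the node is Ready, the harness additionally waits on the labelled
	// kube-system pods; `kubectl wait` expresses the same condition.
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "-n", "kube-system",
		"wait", "--for=condition=Ready", "pod", "-l", "k8s-app=kube-dns", "--timeout=4m").CombinedOutput()
	fmt.Println(string(out), err)
}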
	I1212 20:09:06.335479  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:06.335842  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:06.335889  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:06.335935  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:06.361020  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:06.361037  245478 cri.go:89] found id: ""
	I1212 20:09:06.361045  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:06.361102  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:06.364916  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:06.364979  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:06.390399  245478 cri.go:89] found id: ""
	I1212 20:09:06.390422  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.390428  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:06.390434  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:06.390478  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:06.415074  245478 cri.go:89] found id: ""
	I1212 20:09:06.415099  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.415108  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:06.415114  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:06.415153  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:06.440338  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:06.440354  245478 cri.go:89] found id: ""
	I1212 20:09:06.440361  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:06.440408  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:06.443937  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:06.443994  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:06.469244  245478 cri.go:89] found id: ""
	I1212 20:09:06.469282  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.469294  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:06.469302  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:06.469354  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:06.494742  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:06.494766  245478 cri.go:89] found id: ""
	I1212 20:09:06.494776  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:06.494827  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:06.498685  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:06.498752  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:06.524955  245478 cri.go:89] found id: ""
	I1212 20:09:06.524980  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.524990  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:06.524999  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:06.525056  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:06.550840  245478 cri.go:89] found id: ""
	I1212 20:09:06.550862  245478 logs.go:282] 0 containers: []
	W1212 20:09:06.550869  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:06.550878  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:06.550891  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:06.565171  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:06.565196  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:06.622946  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:06.622970  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:06.622990  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:06.657325  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:06.657352  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:06.682891  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:06.682917  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:06.708585  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:06.708609  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:06.760549  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:06.760574  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:06.788545  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:06.788574  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 20:09:06.920591  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	W1212 20:09:09.420880  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	I1212 20:09:08.573589  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:08.573949  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1212 20:09:08.573998  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:08.574041  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:08.608590  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:08.608608  244825 cri.go:89] found id: ""
	I1212 20:09:08.608620  244825 logs.go:282] 1 containers: [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:09:08.608663  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:08.612214  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:08.612263  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:08.644786  244825 cri.go:89] found id: ""
	I1212 20:09:08.644807  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.644815  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:09:08.644820  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:08.644860  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:08.677788  244825 cri.go:89] found id: ""
	I1212 20:09:08.677806  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.677813  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:09:08.677829  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:08.677881  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:08.710126  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:08.710151  244825 cri.go:89] found id: ""
	I1212 20:09:08.710161  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:09:08.710215  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:08.713669  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:08.713724  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:08.746264  244825 cri.go:89] found id: ""
	I1212 20:09:08.746301  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.746311  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:08.746317  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:08.746360  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:08.779961  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:08.779981  244825 cri.go:89] found id: ""
	I1212 20:09:08.779989  244825 logs.go:282] 1 containers: [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:09:08.780031  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:08.783576  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:08.783628  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:08.815490  244825 cri.go:89] found id: ""
	I1212 20:09:08.815508  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.815515  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:08.815522  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:08.815574  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:08.848479  244825 cri.go:89] found id: ""
	I1212 20:09:08.848499  244825 logs.go:282] 0 containers: []
	W1212 20:09:08.848506  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:08.848514  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:08.848525  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:08.905468  244825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:08.905488  244825 logs.go:123] Gathering logs for kube-apiserver [ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c] ...
	I1212 20:09:08.905500  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:08.942181  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:09:08.942204  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:09.020878  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:09:09.020908  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:09.065058  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:09.065083  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:09.114484  244825 logs.go:123] Gathering logs for container status ...
	I1212 20:09:09.114512  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:09.154042  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:09.154070  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:09.249742  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:09.249771  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
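	While the apiserver stays unreachable, every retry ends with the same host-level diagnostics sweep: kubelet journal, dmesg, the CRI-O journal, and the tail of whichever control-plane containers exist. A compact Go sketch running two of those collectors exactly as the log shows them invoked; the wrapper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// runDiag shells out for the same host diagnostics the harness gathers on a
// failed retry: recent kubelet journal entries and the warning-and-above tail
// of dmesg. The command lines match the log above; the wrapper is a sketch.
func runDiag() {
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"bash", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, c := range cmds {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("==== sudo %v (err=%v) ====\n%s\n", c, err, out)
	}
}

func main() { runDiag() }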
	I1212 20:09:11.767020  244825 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1212 20:09:09.373667  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:09:09.374044  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1212 20:09:09.374095  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:09.374148  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:09.400018  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:09.400039  245478 cri.go:89] found id: ""
	I1212 20:09:09.400047  245478 logs.go:282] 1 containers: [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:09.400087  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:09.403828  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:09.403877  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:09.429257  245478 cri.go:89] found id: ""
	I1212 20:09:09.429286  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.429297  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:09.429304  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:09.429362  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:09.454646  245478 cri.go:89] found id: ""
	I1212 20:09:09.454667  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.454676  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:09.454689  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:09.454741  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:09.479854  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:09.479874  245478 cri.go:89] found id: ""
	I1212 20:09:09.479884  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:09.479946  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:09.483922  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:09.483977  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:09.512707  245478 cri.go:89] found id: ""
	I1212 20:09:09.512731  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.512742  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:09.512751  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:09.512806  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:09.538698  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:09.538723  245478 cri.go:89] found id: ""
	I1212 20:09:09.538733  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:09.538778  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:09.542727  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:09.542799  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:09.567315  245478 cri.go:89] found id: ""
	I1212 20:09:09.567337  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.567348  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:09.567355  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:09.567410  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:09.591384  245478 cri.go:89] found id: ""
	I1212 20:09:09.591409  245478 logs.go:282] 0 containers: []
	W1212 20:09:09.591418  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:09.591427  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:09.591436  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:09:09.644722  245478 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:09:09.644741  245478 logs.go:123] Gathering logs for kube-apiserver [ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998] ...
	I1212 20:09:09.644757  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:09.672822  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:09.672846  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:09.696229  245478 logs.go:123] Gathering logs for kube-controller-manager [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5] ...
	I1212 20:09:09.696250  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:09.720906  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:09.720928  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:09.775396  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:09.775419  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:09.804131  245478 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:09.804151  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:09.886660  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:09.886686  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:12.401525  245478 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1212 20:09:11.421115  265161 node_ready.go:57] node "no-preload-753103" has "Ready":"False" status (will retry)
	I1212 20:09:13.420801  265161 node_ready.go:49] node "no-preload-753103" is "Ready"
	I1212 20:09:13.420826  265161 node_ready.go:38] duration metric: took 13.003141419s for node "no-preload-753103" to be "Ready" ...
	I1212 20:09:13.420842  265161 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:09:13.420896  265161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:09:13.432723  265161 api_server.go:72] duration metric: took 13.312820705s to wait for apiserver process to appear ...
	I1212 20:09:13.432745  265161 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:09:13.432762  265161 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 20:09:13.438395  265161 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1212 20:09:13.439394  265161 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 20:09:13.439431  265161 api_server.go:131] duration metric: took 6.678569ms to wait for apiserver health ...
	I1212 20:09:13.439442  265161 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:09:13.442831  265161 system_pods.go:59] 8 kube-system pods found
	I1212 20:09:13.442865  265161 system_pods.go:61] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:13.442873  265161 system_pods.go:61] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:13.442900  265161 system_pods.go:61] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:13.442909  265161 system_pods.go:61] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:13.442916  265161 system_pods.go:61] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:13.442921  265161 system_pods.go:61] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:13.442934  265161 system_pods.go:61] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:13.442942  265161 system_pods.go:61] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:13.442952  265161 system_pods.go:74] duration metric: took 3.503011ms to wait for pod list to return data ...
	I1212 20:09:13.442964  265161 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:09:13.445920  265161 default_sa.go:45] found service account: "default"
	I1212 20:09:13.445942  265161 default_sa.go:55] duration metric: took 2.971793ms for default service account to be created ...
	I1212 20:09:13.445952  265161 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:09:13.449153  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:13.449182  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:13.449190  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:13.449202  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:13.449208  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:13.449217  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:13.449222  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:13.449233  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:13.449238  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:09:13.449293  265161 retry.go:31] will retry after 228.908933ms: missing components: kube-dns
	I1212 20:09:13.682039  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:13.682066  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:13.682072  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:13.682079  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:13.682082  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:13.682088  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:13.682094  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:13.682102  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:13.682107  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Running
	I1212 20:09:13.682126  265161 retry.go:31] will retry after 381.228296ms: missing components: kube-dns
	I1212 20:09:14.066919  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:14.066948  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:09:14.066953  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:14.066959  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:14.066962  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:14.066971  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:14.066976  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:14.066983  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:14.066990  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Running
	I1212 20:09:14.067009  265161 retry.go:31] will retry after 488.244704ms: missing components: kube-dns
	I1212 20:09:14.557983  265161 system_pods.go:86] 8 kube-system pods found
	I1212 20:09:14.558010  265161 system_pods.go:89] "coredns-7d764666f9-pbqw6" [d3962c56-5385-4b85-b38e-85af8a8ac8ef] Running
	I1212 20:09:14.558015  265161 system_pods.go:89] "etcd-no-preload-753103" [9e43fd30-82c9-4ff4-a7af-e7a3853c2fc0] Running
	I1212 20:09:14.558020  265161 system_pods.go:89] "kindnet-p4b57" [cde1edf5-2032-4960-96aa-39781736a4c4] Running
	I1212 20:09:14.558023  265161 system_pods.go:89] "kube-apiserver-no-preload-753103" [7a5f7400-b1bb-4114-9086-44e2467aa1c5] Running
	I1212 20:09:14.558029  265161 system_pods.go:89] "kube-controller-manager-no-preload-753103" [47c59f91-8737-462a-9db7-7c3cca251be8] Running
	I1212 20:09:14.558034  265161 system_pods.go:89] "kube-proxy-xn425" [e9aeda8a-4980-4713-aeaf-72c392c221c8] Running
	I1212 20:09:14.558046  265161 system_pods.go:89] "kube-scheduler-no-preload-753103" [0ed98851-8887-4f23-88dc-f51c8431a83c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:09:14.558057  265161 system_pods.go:89] "storage-provisioner" [e682308a-054b-4838-85fd-f5925e146ee3] Running
	I1212 20:09:14.558068  265161 system_pods.go:126] duration metric: took 1.112109785s to wait for k8s-apps to be running ...
	I1212 20:09:14.558080  265161 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:09:14.558119  265161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:09:14.570464  265161 system_svc.go:56] duration metric: took 12.375782ms WaitForService to wait for kubelet
	I1212 20:09:14.570488  265161 kubeadm.go:587] duration metric: took 14.450590539s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:09:14.570505  265161 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:09:14.572539  265161 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:09:14.572561  265161 node_conditions.go:123] node cpu capacity is 8
	I1212 20:09:14.572577  265161 node_conditions.go:105] duration metric: took 2.065626ms to run NodePressure ...
	I1212 20:09:14.572590  265161 start.go:242] waiting for startup goroutines ...
	I1212 20:09:14.572603  265161 start.go:247] waiting for cluster config update ...
	I1212 20:09:14.572621  265161 start.go:256] writing updated cluster config ...
	I1212 20:09:14.572868  265161 ssh_runner.go:195] Run: rm -f paused
	I1212 20:09:14.576398  265161 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:14.579043  265161 pod_ready.go:83] waiting for pod "coredns-7d764666f9-pbqw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.582351  265161 pod_ready.go:94] pod "coredns-7d764666f9-pbqw6" is "Ready"
	I1212 20:09:14.582370  265161 pod_ready.go:86] duration metric: took 3.309431ms for pod "coredns-7d764666f9-pbqw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.583980  265161 pod_ready.go:83] waiting for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.587324  265161 pod_ready.go:94] pod "etcd-no-preload-753103" is "Ready"
	I1212 20:09:14.587342  265161 pod_ready.go:86] duration metric: took 3.345068ms for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.590996  265161 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.594241  265161 pod_ready.go:94] pod "kube-apiserver-no-preload-753103" is "Ready"
	I1212 20:09:14.594261  265161 pod_ready.go:86] duration metric: took 3.248013ms for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.595945  265161 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:14.979814  265161 pod_ready.go:94] pod "kube-controller-manager-no-preload-753103" is "Ready"
	I1212 20:09:14.979844  265161 pod_ready.go:86] duration metric: took 383.881079ms for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:15.181064  265161 pod_ready.go:83] waiting for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:15.580629  265161 pod_ready.go:94] pod "kube-proxy-xn425" is "Ready"
	I1212 20:09:15.580651  265161 pod_ready.go:86] duration metric: took 399.55808ms for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:15.780735  265161 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:16.180820  265161 pod_ready.go:94] pod "kube-scheduler-no-preload-753103" is "Ready"
	I1212 20:09:16.180849  265161 pod_ready.go:86] duration metric: took 400.091787ms for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:09:16.180865  265161 pod_ready.go:40] duration metric: took 1.604438666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:09:16.226513  265161 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 20:09:16.230096  265161 out.go:179] * Done! kubectl is now configured to use "no-preload-753103" cluster and "default" namespace by default
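	The run above (pid 265161) waits for the node to report Ready, then polls the kube-system pods with growing delays (228ms, 381ms, 488ms) until kube-dns is running, and finally waits for each control-plane pod to be "Ready" before printing Done. A minimal client-go sketch of that polling pattern, assuming kubeconfig access at the default path; this is illustrative and not minikube's system_pods.go implementation:

// Illustrative sketch only: poll kube-system pods until all report Running,
// retrying with an increasing delay as the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
	backoff := 200 * time.Millisecond
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		pending := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				pending++
			}
		}
		if pending == 0 {
			return nil
		}
		fmt.Printf("will retry after %v: %d pods not running\n", backoff, pending)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
		}
		backoff *= 2 // grow the delay between attempts
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForSystemPods(context.Background(), cs); err != nil {
		panic(err)
	}
}

	Growing the delay between attempts keeps the poll cheap while the scheduler and kubelet converge, which is why the retry intervals in the log above lengthen.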
	I1212 20:09:16.768352  244825 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 20:09:16.768416  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:16.768469  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:16.805892  244825 cri.go:89] found id: "74075c8b2a54f355454d069698932118dd69b5a74b5cdf4f61da665032a426bf"
	I1212 20:09:16.805914  244825 cri.go:89] found id: "ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c"
	I1212 20:09:16.805918  244825 cri.go:89] found id: ""
	I1212 20:09:16.805925  244825 logs.go:282] 2 containers: [74075c8b2a54f355454d069698932118dd69b5a74b5cdf4f61da665032a426bf ce2e5e461967b6c1cd2d9c21bdf9a8d18521e88d23d6efe46f2c3bc4afb8884c]
	I1212 20:09:16.805966  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:16.810059  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:16.813688  244825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:16.813745  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:16.847667  244825 cri.go:89] found id: ""
	I1212 20:09:16.847690  244825 logs.go:282] 0 containers: []
	W1212 20:09:16.847698  244825 logs.go:284] No container was found matching "etcd"
	I1212 20:09:16.847703  244825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:16.847758  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:16.883043  244825 cri.go:89] found id: ""
	I1212 20:09:16.883065  244825 logs.go:282] 0 containers: []
	W1212 20:09:16.883073  244825 logs.go:284] No container was found matching "coredns"
	I1212 20:09:16.883082  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:16.883130  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:16.927697  244825 cri.go:89] found id: "8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:16.927721  244825 cri.go:89] found id: ""
	I1212 20:09:16.927731  244825 logs.go:282] 1 containers: [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08]
	I1212 20:09:16.927793  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:16.931889  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:16.931951  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:16.969000  244825 cri.go:89] found id: ""
	I1212 20:09:16.969026  244825 logs.go:282] 0 containers: []
	W1212 20:09:16.969036  244825 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:16.969044  244825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:16.969098  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:17.007815  244825 cri.go:89] found id: "17137275d052a006cd29121762c11ef2b3ce800e9c872150a1048384bc311c23"
	I1212 20:09:17.007840  244825 cri.go:89] found id: "509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:17.007846  244825 cri.go:89] found id: ""
	I1212 20:09:17.007856  244825 logs.go:282] 2 containers: [17137275d052a006cd29121762c11ef2b3ce800e9c872150a1048384bc311c23 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56]
	I1212 20:09:17.007906  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:17.012155  244825 ssh_runner.go:195] Run: which crictl
	I1212 20:09:17.015793  244825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:17.015840  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:17.055465  244825 cri.go:89] found id: ""
	I1212 20:09:17.055486  244825 logs.go:282] 0 containers: []
	W1212 20:09:17.055496  244825 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:17.055504  244825 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:17.055556  244825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:17.104621  244825 cri.go:89] found id: ""
	I1212 20:09:17.104655  244825 logs.go:282] 0 containers: []
	W1212 20:09:17.104670  244825 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:17.104694  244825 logs.go:123] Gathering logs for kubelet ...
	I1212 20:09:17.104720  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:09:17.207676  244825 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:17.207714  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:17.231197  244825 logs.go:123] Gathering logs for kube-scheduler [8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08] ...
	I1212 20:09:17.231235  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce5d68889a41dd5b0eae817c0f3c7a30dc81e0e9f730856a105ac95dac8aa08"
	I1212 20:09:17.327554  244825 logs.go:123] Gathering logs for kube-controller-manager [509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56] ...
	I1212 20:09:17.327586  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509e8c0b07c662145667ff955ee8fd6ffa43309c4806d3c57eddfaf5c9166a56"
	I1212 20:09:17.365842  244825 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:17.365875  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:17.428694  244825 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:17.428731  244825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 20:09:17.403367  245478 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 20:09:17.403429  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:09:17.403474  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:09:17.433708  245478 cri.go:89] found id: "c53041282dcf5c5b69f68d4c1e73c1539a57eaa02a64345a72f74d2480b7ed41"
	I1212 20:09:17.433732  245478 cri.go:89] found id: "ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998"
	I1212 20:09:17.433738  245478 cri.go:89] found id: ""
	I1212 20:09:17.433754  245478 logs.go:282] 2 containers: [c53041282dcf5c5b69f68d4c1e73c1539a57eaa02a64345a72f74d2480b7ed41 ccc934f2bd19eea07e82038eef30ec665a62c7c9c9c1fbc78495eb74f5ee8998]
	I1212 20:09:17.433811  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:17.438004  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:17.442500  245478 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:09:17.442561  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:09:17.470007  245478 cri.go:89] found id: ""
	I1212 20:09:17.470068  245478 logs.go:282] 0 containers: []
	W1212 20:09:17.470094  245478 logs.go:284] No container was found matching "etcd"
	I1212 20:09:17.470102  245478 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:09:17.470154  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:09:17.501176  245478 cri.go:89] found id: ""
	I1212 20:09:17.501202  245478 logs.go:282] 0 containers: []
	W1212 20:09:17.501213  245478 logs.go:284] No container was found matching "coredns"
	I1212 20:09:17.501224  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:09:17.501331  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:09:17.530824  245478 cri.go:89] found id: "02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:17.530847  245478 cri.go:89] found id: ""
	I1212 20:09:17.530854  245478 logs.go:282] 1 containers: [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849]
	I1212 20:09:17.530901  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:17.534865  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:09:17.534920  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:09:17.562532  245478 cri.go:89] found id: ""
	I1212 20:09:17.562559  245478 logs.go:282] 0 containers: []
	W1212 20:09:17.562568  245478 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:09:17.562576  245478 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:09:17.562632  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:09:17.590918  245478 cri.go:89] found id: "4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5"
	I1212 20:09:17.590940  245478 cri.go:89] found id: ""
	I1212 20:09:17.590950  245478 logs.go:282] 1 containers: [4d50cf5e34c7a021f325790c6945eee94924163477eb357c644eedd7d541bac5]
	I1212 20:09:17.591006  245478 ssh_runner.go:195] Run: which crictl
	I1212 20:09:17.595708  245478 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:09:17.595773  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:09:17.623302  245478 cri.go:89] found id: ""
	I1212 20:09:17.623324  245478 logs.go:282] 0 containers: []
	W1212 20:09:17.623334  245478 logs.go:284] No container was found matching "kindnet"
	I1212 20:09:17.623341  245478 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:09:17.623386  245478 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:09:17.651430  245478 cri.go:89] found id: ""
	I1212 20:09:17.651449  245478 logs.go:282] 0 containers: []
	W1212 20:09:17.651456  245478 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:09:17.651471  245478 logs.go:123] Gathering logs for kube-scheduler [02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849] ...
	I1212 20:09:17.651488  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02e1af13f16303139143507fbaab695056651ed682da2b47f7b3c1705ea98849"
	I1212 20:09:17.677613  245478 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:09:17.677633  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:09:17.735249  245478 logs.go:123] Gathering logs for container status ...
	I1212 20:09:17.735283  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:09:17.766662  245478 logs.go:123] Gathering logs for dmesg ...
	I1212 20:09:17.766690  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:09:17.782227  245478 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:09:17.782259  245478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
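	Each "listing CRI containers" / "Gathering logs" pair above resolves a component name to container IDs with "crictl ps -a --quiet --name=<component>" and then tails those containers' logs. A minimal sketch of that discovery step, assuming local crictl access instead of minikube's ssh_runner:

// Illustrative sketch only: discover container IDs for each control-plane
// component by shelling out to crictl, as the log-gathering step above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// One container ID per line; an empty result corresponds to the
	// "No container was found matching ..." warnings in the log.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}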
	
	
	==> CRI-O <==
	Dec 12 20:09:13 no-preload-753103 crio[765]: time="2025-12-12T20:09:13.478852865Z" level=info msg="Starting container: 810cafa8aa8481c61fe01904e4747d1eda001b6975d42693851534b7250d1c22" id=11f1fc44-a364-40f2-9057-0edb35a40cde name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:09:13 no-preload-753103 crio[765]: time="2025-12-12T20:09:13.480832444Z" level=info msg="Started container" PID=2820 containerID=810cafa8aa8481c61fe01904e4747d1eda001b6975d42693851534b7250d1c22 description=kube-system/coredns-7d764666f9-pbqw6/coredns id=11f1fc44-a364-40f2-9057-0edb35a40cde name=/runtime.v1.RuntimeService/StartContainer sandboxID=61dbec3109f8b469c0096c094dd6ad790899c072c86d5db0934441d5ea68d45d
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.714589563Z" level=info msg="Running pod sandbox: default/busybox/POD" id=98e4cb07-2e16-4ac3-9a75-5159631a5df9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.714667481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.719662924Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:63e9d60e296dd5441d911e41106cac2bea8be6b3a2a5b772b45aa87026128de8 UID:3b9946fe-7d9a-4087-960d-57c19ff595d9 NetNS:/var/run/netns/ffa79fb5-0bf2-4a38-92af-121882360ea6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000121050}] Aliases:map[]}"
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.719690529Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.73104147Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:63e9d60e296dd5441d911e41106cac2bea8be6b3a2a5b772b45aa87026128de8 UID:3b9946fe-7d9a-4087-960d-57c19ff595d9 NetNS:/var/run/netns/ffa79fb5-0bf2-4a38-92af-121882360ea6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000121050}] Aliases:map[]}"
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.731189697Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.731937604Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.732728609Z" level=info msg="Ran pod sandbox 63e9d60e296dd5441d911e41106cac2bea8be6b3a2a5b772b45aa87026128de8 with infra container: default/busybox/POD" id=98e4cb07-2e16-4ac3-9a75-5159631a5df9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.734026416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=800950ff-c1ee-4ed1-a3d6-8f48d53177d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.734147519Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=800950ff-c1ee-4ed1-a3d6-8f48d53177d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.734193396Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=800950ff-c1ee-4ed1-a3d6-8f48d53177d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.735017612Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=88599097-314c-4f55-bf70-a762c2232863 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:09:16 no-preload-753103 crio[765]: time="2025-12-12T20:09:16.736482151Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.356400635Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=88599097-314c-4f55-bf70-a762c2232863 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.357031611Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=480786be-c8b4-432d-b50f-7cff368a7d78 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.358596524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74700236-3be5-44ba-b377-85209e17b257 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.361819636Z" level=info msg="Creating container: default/busybox/busybox" id=1f2c8949-35ab-4a19-b1ea-93d0cb6b9180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.361947572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.366302551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.366852684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.394512721Z" level=info msg="Created container e3e0da7f3169d63c08b09a976642f975ca64a93517b5e85f1c848d2bc9b4925b: default/busybox/busybox" id=1f2c8949-35ab-4a19-b1ea-93d0cb6b9180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.395159363Z" level=info msg="Starting container: e3e0da7f3169d63c08b09a976642f975ca64a93517b5e85f1c848d2bc9b4925b" id=7284c1dc-8647-47b1-afc9-faf7da303b90 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:09:17 no-preload-753103 crio[765]: time="2025-12-12T20:09:17.396880809Z" level=info msg="Started container" PID=2892 containerID=e3e0da7f3169d63c08b09a976642f975ca64a93517b5e85f1c848d2bc9b4925b description=default/busybox/busybox id=7284c1dc-8647-47b1-afc9-faf7da303b90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=63e9d60e296dd5441d911e41106cac2bea8be6b3a2a5b772b45aa87026128de8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e3e0da7f3169d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   63e9d60e296dd       busybox                                     default
	810cafa8aa848       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   61dbec3109f8b       coredns-7d764666f9-pbqw6                    kube-system
	8276db3edc188       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   5575e93198cd3       storage-provisioner                         kube-system
	589794d572226       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   dfb5d8aa964cb       kindnet-p4b57                               kube-system
	5ad079e0a7417       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   1abc70c806a88       kube-proxy-xn425                            kube-system
	16c1160e75ac1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   b2f34fe597064       etcd-no-preload-753103                      kube-system
	e6adda7a92d15       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   a12d6460ae944       kube-scheduler-no-preload-753103            kube-system
	5aa8664880610       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   e676febab00bc       kube-controller-manager-no-preload-753103   kube-system
	6efd75d089e74       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   879ef353cfac7       kube-apiserver-no-preload-753103            kube-system
	
	
	==> coredns [810cafa8aa8481c61fe01904e4747d1eda001b6975d42693851534b7250d1c22] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53882 - 28834 "HINFO IN 7923280302980003576.2019293216646643731. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063259042s
	
	
	==> describe nodes <==
	Name:               no-preload-753103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-753103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=no-preload-753103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_08_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-753103
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:09:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:09:25 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:09:25 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:09:25 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:09:25 +0000   Fri, 12 Dec 2025 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-753103
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                f5184786-74a4-443d-967a-ec8e68a8cf1e
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-pbqw6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-753103                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-p4b57                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-753103             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-753103    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-xn425                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-753103             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-753103 event: Registered Node no-preload-753103 in Controller
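	For reference, the 850m CPU request above is the sum of the per-pod requests in the table: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m of the node's 8000m ≈ 10.6%, shown as 10%. The 220Mi memory request is likewise 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet).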
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [16c1160e75ac17ece5ba3616ba2569afc6bf20eb5aeb3fd4b9dd488f9d6ee4ef] <==
	{"level":"warn","ts":"2025-12-12T20:08:50.711770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.725429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.732658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.739388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.746793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.755366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.763049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.771394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.782320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.788664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.795180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.801991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.816641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.823738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.829888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.836408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:50.887955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:08:52.718782Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.098444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:08:52.718906Z","caller":"traceutil/trace.go:172","msg":"trace[1350465338] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:152; }","duration":"211.239692ms","start":"2025-12-12T20:08:52.507629Z","end":"2025-12-12T20:08:52.718869Z","steps":["trace[1350465338] 'range keys from in-memory index tree'  (duration: 206.40538ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:08:52.719312Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.477347ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597681539356819 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:controller:namespace-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:namespace-controller\" value_size:648 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-12T20:08:52.719483Z","caller":"traceutil/trace.go:172","msg":"trace[1154531141] transaction","detail":"{read_only:false; response_revision:154; number_of_response:1; }","duration":"179.676854ms","start":"2025-12-12T20:08:52.539795Z","end":"2025-12-12T20:08:52.719472Z","steps":["trace[1154531141] 'process raft request'  (duration: 179.617486ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:08:52.719491Z","caller":"traceutil/trace.go:172","msg":"trace[1966865515] transaction","detail":"{read_only:false; response_revision:153; number_of_response:1; }","duration":"230.307312ms","start":"2025-12-12T20:08:52.489168Z","end":"2025-12-12T20:08:52.719475Z","steps":["trace[1966865515] 'process raft request'  (duration: 23.146687ms)","trace[1966865515] 'compare'  (duration: 206.367501ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:08:52.812149Z","caller":"traceutil/trace.go:172","msg":"trace[572755683] transaction","detail":"{read_only:false; response_revision:155; number_of_response:1; }","duration":"251.418633ms","start":"2025-12-12T20:08:52.560711Z","end":"2025-12-12T20:08:52.812129Z","steps":["trace[572755683] 'process raft request'  (duration: 251.335068ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:08:52.945495Z","caller":"traceutil/trace.go:172","msg":"trace[1436914561] transaction","detail":"{read_only:false; response_revision:156; number_of_response:1; }","duration":"130.965791ms","start":"2025-12-12T20:08:52.814498Z","end":"2025-12-12T20:08:52.945464Z","steps":["trace[1436914561] 'process raft request'  (duration: 62.575301ms)","trace[1436914561] 'compare'  (duration: 68.276505ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:08:53.134854Z","caller":"traceutil/trace.go:172","msg":"trace[1578050831] transaction","detail":"{read_only:false; response_revision:158; number_of_response:1; }","duration":"122.90283ms","start":"2025-12-12T20:08:53.011920Z","end":"2025-12-12T20:08:53.134823Z","steps":["trace[1578050831] 'process raft request'  (duration: 57.731359ms)","trace[1578050831] 'compare'  (duration: 65.05566ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:09:25 up 51 min,  0 user,  load average: 1.25, 1.66, 1.36
	Linux no-preload-753103 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [589794d5722260fef15570a5308cd4fcac5515dcbb6fe48a426ef8e208495bfb] <==
	I1212 20:09:02.520070       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:09:02.520687       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1212 20:09:02.520800       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:09:02.520816       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:09:02.520835       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:09:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:09:02.815747       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:09:02.816313       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:09:02.816335       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:09:02.816491       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:09:03.116490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:09:03.116514       1 metrics.go:72] Registering metrics
	I1212 20:09:03.116573       1 controller.go:711] "Syncing nftables rules"
	I1212 20:09:12.816727       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:09:12.816799       1 main.go:301] handling current node
	I1212 20:09:22.819682       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:09:22.819718       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6efd75d089e74ca5a4c99df356d9d5302d75e8a58f9e1ca901acbaf1ecb57c3c] <==
	I1212 20:08:51.405709       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:08:51.405709       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 20:08:51.406527       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:08:51.408644       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1212 20:08:51.411427       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:08:51.412997       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:08:51.604174       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:08:52.310329       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1212 20:08:52.313878       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:08:52.313896       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 20:08:53.538343       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:08:53.576305       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:08:53.717596       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:08:53.727115       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1212 20:08:53.728349       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:08:53.734774       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:08:54.335796       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:08:54.561107       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:08:54.569043       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:08:54.575542       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 20:09:00.034706       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:09:00.139546       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1212 20:09:00.204210       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:09:00.210710       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1212 20:09:24.495189       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36668: use of closed network connection
	
	
	==> kube-controller-manager [5aa86648806107052e3bb0abed286b99e064ea7e86b3d88364eec9d2bf0c2a52] <==
	I1212 20:08:59.143448       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.143393       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.143479       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.143672       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 20:08:59.143709       1 range_allocator.go:177] "Sending events to api server"
	I1212 20:08:59.143742       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-753103"
	I1212 20:08:59.143854       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1212 20:08:59.143904       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1212 20:08:59.143969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:08:59.144002       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.143404       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.144300       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.144506       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.144580       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.144690       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.144883       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.145340       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.145979       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:08:59.151246       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.160575       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-753103" podCIDRs=["10.244.0.0/24"]
	I1212 20:08:59.240295       1 shared_informer.go:377] "Caches are synced"
	I1212 20:08:59.240315       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 20:08:59.240320       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 20:08:59.246165       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:14.145972       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [5ad079e0a741701cdc241c274cfc4ec0ef8b5a4b16e184e3a50c3b05caae33b4] <==
	I1212 20:09:00.603006       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:09:00.674805       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:00.775668       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:00.775705       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1212 20:09:00.775807       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:09:00.796313       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:09:00.796359       1 server_linux.go:136] "Using iptables Proxier"
	I1212 20:09:00.802025       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:09:00.802359       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 20:09:00.802438       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:00.803944       1 config.go:309] "Starting node config controller"
	I1212 20:09:00.803959       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:09:00.803967       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:09:00.804289       1 config.go:200] "Starting service config controller"
	I1212 20:09:00.804307       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:09:00.804317       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:09:00.804332       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:09:00.804390       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:09:00.804416       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:09:00.905363       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:09:00.905373       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:09:00.905410       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e6adda7a92d154ef0d859f917428f7d9348e1a470bee7e5943501a4e61590371] <==
	E1212 20:08:52.576366       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 20:08:52.577333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1212 20:08:52.739953       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 20:08:52.740893       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1212 20:08:52.746990       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1212 20:08:52.747841       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1212 20:08:52.790433       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 20:08:52.791450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1212 20:08:52.835630       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 20:08:52.836543       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1212 20:08:52.887394       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1212 20:08:52.887454       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1212 20:08:52.888358       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1212 20:08:52.888444       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1212 20:08:52.893598       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1212 20:08:52.894329       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1212 20:08:52.904345       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 20:08:52.905297       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1212 20:08:52.918080       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1212 20:08:52.918882       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1212 20:08:52.925821       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1212 20:08:52.926715       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1212 20:08:52.938674       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1212 20:08:52.939559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I1212 20:08:54.754546       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 20:09:00 no-preload-753103 kubelet[2212]: I1212 20:09:00.246923    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9aeda8a-4980-4713-aeaf-72c392c221c8-lib-modules\") pod \"kube-proxy-xn425\" (UID: \"e9aeda8a-4980-4713-aeaf-72c392c221c8\") " pod="kube-system/kube-proxy-xn425"
	Dec 12 20:09:00 no-preload-753103 kubelet[2212]: I1212 20:09:00.246947    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cde1edf5-2032-4960-96aa-39781736a4c4-xtables-lock\") pod \"kindnet-p4b57\" (UID: \"cde1edf5-2032-4960-96aa-39781736a4c4\") " pod="kube-system/kindnet-p4b57"
	Dec 12 20:09:00 no-preload-753103 kubelet[2212]: I1212 20:09:00.246970    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cde1edf5-2032-4960-96aa-39781736a4c4-lib-modules\") pod \"kindnet-p4b57\" (UID: \"cde1edf5-2032-4960-96aa-39781736a4c4\") " pod="kube-system/kindnet-p4b57"
	Dec 12 20:09:00 no-preload-753103 kubelet[2212]: I1212 20:09:00.246991    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf5m4\" (UniqueName: \"kubernetes.io/projected/e9aeda8a-4980-4713-aeaf-72c392c221c8-kube-api-access-mf5m4\") pod \"kube-proxy-xn425\" (UID: \"e9aeda8a-4980-4713-aeaf-72c392c221c8\") " pod="kube-system/kube-proxy-xn425"
	Dec 12 20:09:00 no-preload-753103 kubelet[2212]: E1212 20:09:00.546827    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-753103" containerName="etcd"
	Dec 12 20:09:01 no-preload-753103 kubelet[2212]: I1212 20:09:01.468119    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-xn425" podStartSLOduration=1.468099736 podStartE2EDuration="1.468099736s" podCreationTimestamp="2025-12-12 20:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:09:01.467945008 +0000 UTC m=+7.143158997" watchObservedRunningTime="2025-12-12 20:09:01.468099736 +0000 UTC m=+7.143313691"
	Dec 12 20:09:02 no-preload-753103 kubelet[2212]: E1212 20:09:02.931639    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-753103" containerName="kube-controller-manager"
	Dec 12 20:09:02 no-preload-753103 kubelet[2212]: I1212 20:09:02.941225    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-p4b57" podStartSLOduration=1.110402138 podStartE2EDuration="2.941207284s" podCreationTimestamp="2025-12-12 20:09:00 +0000 UTC" firstStartedPulling="2025-12-12 20:09:00.483389711 +0000 UTC m=+6.158603658" lastFinishedPulling="2025-12-12 20:09:02.314194869 +0000 UTC m=+7.989408804" observedRunningTime="2025-12-12 20:09:02.47255919 +0000 UTC m=+8.147773144" watchObservedRunningTime="2025-12-12 20:09:02.941207284 +0000 UTC m=+8.616421239"
	Dec 12 20:09:04 no-preload-753103 kubelet[2212]: E1212 20:09:04.713597    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-753103" containerName="kube-scheduler"
	Dec 12 20:09:06 no-preload-753103 kubelet[2212]: E1212 20:09:06.086666    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-753103" containerName="kube-apiserver"
	Dec 12 20:09:10 no-preload-753103 kubelet[2212]: E1212 20:09:10.548144    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-753103" containerName="etcd"
	Dec 12 20:09:12 no-preload-753103 kubelet[2212]: E1212 20:09:12.936449    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-753103" containerName="kube-controller-manager"
	Dec 12 20:09:13 no-preload-753103 kubelet[2212]: I1212 20:09:13.106057    2212 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 12 20:09:13 no-preload-753103 kubelet[2212]: I1212 20:09:13.240429    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e682308a-054b-4838-85fd-f5925e146ee3-tmp\") pod \"storage-provisioner\" (UID: \"e682308a-054b-4838-85fd-f5925e146ee3\") " pod="kube-system/storage-provisioner"
	Dec 12 20:09:13 no-preload-753103 kubelet[2212]: I1212 20:09:13.240466    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jv8\" (UniqueName: \"kubernetes.io/projected/d3962c56-5385-4b85-b38e-85af8a8ac8ef-kube-api-access-k5jv8\") pod \"coredns-7d764666f9-pbqw6\" (UID: \"d3962c56-5385-4b85-b38e-85af8a8ac8ef\") " pod="kube-system/coredns-7d764666f9-pbqw6"
	Dec 12 20:09:13 no-preload-753103 kubelet[2212]: I1212 20:09:13.240488    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8c8\" (UniqueName: \"kubernetes.io/projected/e682308a-054b-4838-85fd-f5925e146ee3-kube-api-access-cs8c8\") pod \"storage-provisioner\" (UID: \"e682308a-054b-4838-85fd-f5925e146ee3\") " pod="kube-system/storage-provisioner"
	Dec 12 20:09:13 no-preload-753103 kubelet[2212]: I1212 20:09:13.240501    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3962c56-5385-4b85-b38e-85af8a8ac8ef-config-volume\") pod \"coredns-7d764666f9-pbqw6\" (UID: \"d3962c56-5385-4b85-b38e-85af8a8ac8ef\") " pod="kube-system/coredns-7d764666f9-pbqw6"
	Dec 12 20:09:14 no-preload-753103 kubelet[2212]: E1212 20:09:14.489091    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pbqw6" containerName="coredns"
	Dec 12 20:09:14 no-preload-753103 kubelet[2212]: I1212 20:09:14.499191    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.499172917 podStartE2EDuration="14.499172917s" podCreationTimestamp="2025-12-12 20:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:09:13.493810111 +0000 UTC m=+19.169024065" watchObservedRunningTime="2025-12-12 20:09:14.499172917 +0000 UTC m=+20.174386868"
	Dec 12 20:09:14 no-preload-753103 kubelet[2212]: I1212 20:09:14.508380    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pbqw6" podStartSLOduration=14.508363906 podStartE2EDuration="14.508363906s" podCreationTimestamp="2025-12-12 20:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:09:14.498964336 +0000 UTC m=+20.174178290" watchObservedRunningTime="2025-12-12 20:09:14.508363906 +0000 UTC m=+20.183577860"
	Dec 12 20:09:14 no-preload-753103 kubelet[2212]: E1212 20:09:14.717889    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-753103" containerName="kube-scheduler"
	Dec 12 20:09:15 no-preload-753103 kubelet[2212]: E1212 20:09:15.491651    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pbqw6" containerName="coredns"
	Dec 12 20:09:16 no-preload-753103 kubelet[2212]: I1212 20:09:16.460641    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbb5c\" (UniqueName: \"kubernetes.io/projected/3b9946fe-7d9a-4087-960d-57c19ff595d9-kube-api-access-bbb5c\") pod \"busybox\" (UID: \"3b9946fe-7d9a-4087-960d-57c19ff595d9\") " pod="default/busybox"
	Dec 12 20:09:16 no-preload-753103 kubelet[2212]: E1212 20:09:16.493256    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pbqw6" containerName="coredns"
	Dec 12 20:09:17 no-preload-753103 kubelet[2212]: I1212 20:09:17.509395    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.886170806 podStartE2EDuration="1.509376191s" podCreationTimestamp="2025-12-12 20:09:16 +0000 UTC" firstStartedPulling="2025-12-12 20:09:16.73463745 +0000 UTC m=+22.409851399" lastFinishedPulling="2025-12-12 20:09:17.357842833 +0000 UTC m=+23.033056784" observedRunningTime="2025-12-12 20:09:17.509071815 +0000 UTC m=+23.184285769" watchObservedRunningTime="2025-12-12 20:09:17.509376191 +0000 UTC m=+23.184590146"
	
	
	==> storage-provisioner [8276db3edc188b9feb4004786a5c6712f09c9b76e1c432f0076cc1c2fdec4b31] <==
	I1212 20:09:13.486608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:09:13.494812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:09:13.494875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:09:13.496573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:13.500449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:09:13.500597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:09:13.500724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-753103_5a752149-5b29-4b9e-b291-d33afaa2b895!
	I1212 20:09:13.500693       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a4a2be5-48cb-4e82-81d6-ec5f27edd4fa", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-753103_5a752149-5b29-4b9e-b291-d33afaa2b895 became leader
	W1212 20:09:13.502062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:13.506065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:09:13.601371       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-753103_5a752149-5b29-4b9e-b291-d33afaa2b895!
	W1212 20:09:15.509144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:15.512864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:17.516297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:17.521936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:19.524512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:19.527934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:21.530465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:21.533906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:23.536649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:23.540195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:25.543454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:09:25.547593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753103 -n no-preload-753103
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-753103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-824670 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-824670 --alsologtostderr -v=1: exit status 80 (1.887110292s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-824670 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:10:32.330231  290464 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:32.330422  290464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:32.330434  290464 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:32.330440  290464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:32.330709  290464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:10:32.330958  290464 out.go:368] Setting JSON to false
	I1212 20:10:32.330977  290464 mustload.go:66] Loading cluster: old-k8s-version-824670
	I1212 20:10:32.331423  290464 config.go:182] Loaded profile config "old-k8s-version-824670": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 20:10:32.331844  290464 cli_runner.go:164] Run: docker container inspect old-k8s-version-824670 --format={{.State.Status}}
	I1212 20:10:32.352027  290464 host.go:66] Checking if "old-k8s-version-824670" exists ...
	I1212 20:10:32.352420  290464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:32.420020  290464 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-12 20:10:32.406778828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:32.421250  290464 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765505725-22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765505725-22112-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-824670 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 20:10:32.423155  290464 out.go:179] * Pausing node old-k8s-version-824670 ... 
	I1212 20:10:32.424293  290464 host.go:66] Checking if "old-k8s-version-824670" exists ...
	I1212 20:10:32.424533  290464 ssh_runner.go:195] Run: systemctl --version
	I1212 20:10:32.424567  290464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-824670
	I1212 20:10:32.445503  290464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/old-k8s-version-824670/id_rsa Username:docker}
	I1212 20:10:32.543852  290464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:32.557021  290464 pause.go:52] kubelet running: true
	I1212 20:10:32.557124  290464 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:10:32.762146  290464 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:10:32.762243  290464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:10:32.848473  290464 cri.go:89] found id: "d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd"
	I1212 20:10:32.848500  290464 cri.go:89] found id: "5de41c6aa492fe5e76f33d9b2461f6010074bf5e9cecb2e7aaa01d47eff17b90"
	I1212 20:10:32.848507  290464 cri.go:89] found id: "2b7d114503f1e7c1f729cb4a9c42a05780b95035046d4e4ef6f086068d58d276"
	I1212 20:10:32.848513  290464 cri.go:89] found id: "791996c96c1570a58c5ea4f6aab56589666965c63b05afd3cf932d0e002d46bf"
	I1212 20:10:32.848517  290464 cri.go:89] found id: "74f2f8fee475f6c8156e8874b05736c6e859ff4488ac0eef026e65fab8b4755e"
	I1212 20:10:32.848523  290464 cri.go:89] found id: "9a180d91c2d49bf246e2537f6f6ec9383636af5bcd8e483965280f5e2ed16670"
	I1212 20:10:32.848527  290464 cri.go:89] found id: "849ed107c3cf4dafcf63a6f35cbf26763c3ee90be82e68800ff7025351783d38"
	I1212 20:10:32.848533  290464 cri.go:89] found id: "6eac65576e4bbc186c5b79d6f0aa97f5d5234fb637be74a5ce5b44491b28bb54"
	I1212 20:10:32.848538  290464 cri.go:89] found id: "956a9b5c70fa4a99806a77c8333552c74d1d682dc9e545864ecd4d6ae67331a9"
	I1212 20:10:32.848545  290464 cri.go:89] found id: "383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d"
	I1212 20:10:32.848549  290464 cri.go:89] found id: "bf67adb0b535c4f80332ee7cbb048fb485b60b71283dd19817a84c6c40a9acf6"
	I1212 20:10:32.848552  290464 cri.go:89] found id: ""
	I1212 20:10:32.848586  290464 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:10:32.862361  290464 retry.go:31] will retry after 212.404399ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:32Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:10:33.075629  290464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:33.090153  290464 pause.go:52] kubelet running: false
	I1212 20:10:33.090211  290464 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:10:33.262299  290464 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:10:33.262440  290464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:10:33.343873  290464 cri.go:89] found id: "d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd"
	I1212 20:10:33.343892  290464 cri.go:89] found id: "5de41c6aa492fe5e76f33d9b2461f6010074bf5e9cecb2e7aaa01d47eff17b90"
	I1212 20:10:33.343898  290464 cri.go:89] found id: "2b7d114503f1e7c1f729cb4a9c42a05780b95035046d4e4ef6f086068d58d276"
	I1212 20:10:33.343903  290464 cri.go:89] found id: "791996c96c1570a58c5ea4f6aab56589666965c63b05afd3cf932d0e002d46bf"
	I1212 20:10:33.343908  290464 cri.go:89] found id: "74f2f8fee475f6c8156e8874b05736c6e859ff4488ac0eef026e65fab8b4755e"
	I1212 20:10:33.343914  290464 cri.go:89] found id: "9a180d91c2d49bf246e2537f6f6ec9383636af5bcd8e483965280f5e2ed16670"
	I1212 20:10:33.343918  290464 cri.go:89] found id: "849ed107c3cf4dafcf63a6f35cbf26763c3ee90be82e68800ff7025351783d38"
	I1212 20:10:33.343923  290464 cri.go:89] found id: "6eac65576e4bbc186c5b79d6f0aa97f5d5234fb637be74a5ce5b44491b28bb54"
	I1212 20:10:33.343928  290464 cri.go:89] found id: "956a9b5c70fa4a99806a77c8333552c74d1d682dc9e545864ecd4d6ae67331a9"
	I1212 20:10:33.343937  290464 cri.go:89] found id: "383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d"
	I1212 20:10:33.343944  290464 cri.go:89] found id: "bf67adb0b535c4f80332ee7cbb048fb485b60b71283dd19817a84c6c40a9acf6"
	I1212 20:10:33.343949  290464 cri.go:89] found id: ""
	I1212 20:10:33.343995  290464 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:10:33.357639  290464 retry.go:31] will retry after 274.350203ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:33Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:10:33.632910  290464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:33.646133  290464 pause.go:52] kubelet running: false
	I1212 20:10:33.646189  290464 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:10:33.801066  290464 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:10:33.801154  290464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:10:33.872765  290464 cri.go:89] found id: "d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd"
	I1212 20:10:33.872785  290464 cri.go:89] found id: "5de41c6aa492fe5e76f33d9b2461f6010074bf5e9cecb2e7aaa01d47eff17b90"
	I1212 20:10:33.872792  290464 cri.go:89] found id: "2b7d114503f1e7c1f729cb4a9c42a05780b95035046d4e4ef6f086068d58d276"
	I1212 20:10:33.872797  290464 cri.go:89] found id: "791996c96c1570a58c5ea4f6aab56589666965c63b05afd3cf932d0e002d46bf"
	I1212 20:10:33.872800  290464 cri.go:89] found id: "74f2f8fee475f6c8156e8874b05736c6e859ff4488ac0eef026e65fab8b4755e"
	I1212 20:10:33.872803  290464 cri.go:89] found id: "9a180d91c2d49bf246e2537f6f6ec9383636af5bcd8e483965280f5e2ed16670"
	I1212 20:10:33.872806  290464 cri.go:89] found id: "849ed107c3cf4dafcf63a6f35cbf26763c3ee90be82e68800ff7025351783d38"
	I1212 20:10:33.872808  290464 cri.go:89] found id: "6eac65576e4bbc186c5b79d6f0aa97f5d5234fb637be74a5ce5b44491b28bb54"
	I1212 20:10:33.872811  290464 cri.go:89] found id: "956a9b5c70fa4a99806a77c8333552c74d1d682dc9e545864ecd4d6ae67331a9"
	I1212 20:10:33.872817  290464 cri.go:89] found id: "383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d"
	I1212 20:10:33.872820  290464 cri.go:89] found id: "bf67adb0b535c4f80332ee7cbb048fb485b60b71283dd19817a84c6c40a9acf6"
	I1212 20:10:33.872823  290464 cri.go:89] found id: ""
	I1212 20:10:33.872859  290464 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:10:33.920752  290464 out.go:203] 
	W1212 20:10:34.034481  290464 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:10:34.034511  290464 out.go:285] * 
	* 
	W1212 20:10:34.049634  290464 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:10:34.069644  290464 out.go:203] 

                                                
                                                
** /stderr **
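The GUEST_PAUSE exit above boils down to the pause path running `sudo runc list -f json` inside the node container and getting `open /run/runc: no such file or directory`, even though crictl still enumerates the kube-system containers. A minimal manual re-check, assuming the old-k8s-version-824670 container from this run is still up (hypothetical commands, not part of the recorded run):

	# Does the runc state directory that `runc list` reads actually exist?
	minikube -p old-k8s-version-824670 ssh -- sudo ls /run/runc
	# The same container listing the pause code builds via crictl:
	minikube -p old-k8s-version-824670 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

If the first command reports the directory as missing while the second still prints container IDs, the retry-then-exit sequence shown in the stderr above is the expected outcome.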
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-824670 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-824670
helpers_test.go:244: (dbg) docker inspect old-k8s-version-824670:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f",
	        "Created": "2025-12-12T20:08:24.734370557Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:09:35.041301945Z",
	            "FinishedAt": "2025-12-12T20:09:34.234827669Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/hosts",
	        "LogPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f-json.log",
	        "Name": "/old-k8s-version-824670",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-824670:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-824670",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f",
	                "LowerDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-824670",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-824670/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-824670",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-824670",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-824670",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c1743c2c5c4a69caa9bb214fc5176e0ae400b8e95bfca84d34e9165abc31fb47",
	            "SandboxKey": "/var/run/docker/netns/c1743c2c5c4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-824670": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "54eba6dc9ad901e89d943167287789d7ba6943774fa37cc0a202f7a86e0bfc9a",
	                    "EndpointID": "bc472e2797f41014170966e3081b872df44a837f122a80f63a7242dff4a1c896",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "62:80:cf:65:55:59",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-824670",
	                        "5ab927c640d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
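For reference, the pause flow earlier consulted the same container with `docker container inspect old-k8s-version-824670 --format={{.State.Status}}` (see the cli_runner call in the stderr above); the JSON here is the post-mortem view of that state. A narrower, hypothetical one-liner that pulls just the fields relevant to pausing would be:

	docker container inspect old-k8s-version-824670 --format '{{.State.Status}} paused={{.State.Paused}}'

Given the State block above, that prints "running paused=false", consistent with the host status checked next.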
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670: exit status 2 (346.677483ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
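The non-zero exit here is unsurprising: the failed pause attempt above already ran `sudo systemctl disable --now kubelet`, so the Host field still reads Running while the node's kubelet is stopped, and `minikube status` reflects that through its exit code. A fuller, hypothetical way to see each component at this point:

	out/minikube-linux-amd64 status -p old-k8s-version-824670 --output json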
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-824670 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-824670 logs -n 25: (1.6095566s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p pause-243084                                                                                                                                                                                                                               │ pause-243084                 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ stopped-upgrade-180826       │ jenkins │ v1.35.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                  │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ stopped-upgrade-180826 stop                                                                                                                                                                                                                   │ stopped-upgrade-180826       │ jenkins │ v1.35.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:06 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p running-upgrade-569692                                                                                                                                                                                                                     │ running-upgrade-569692       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ delete  │ -p cert-expiration-070436                                                                                                                                                                                                                     │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p old-k8s-version-824670 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p no-preload-753103 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-824670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ addons  │ enable dashboard -p no-preload-753103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                     │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                               │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:31.092369  289770 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:31.092499  289770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:31.092510  289770 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:31.092517  289770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:31.092742  289770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:10:31.093222  289770 out.go:368] Setting JSON to false
	I1212 20:10:31.094372  289770 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3178,"bootTime":1765567053,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:10:31.094425  289770 start.go:143] virtualization: kvm guest
	I1212 20:10:31.097386  289770 out.go:179] * [default-k8s-diff-port-433034] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:10:31.098577  289770 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:31.098566  289770 notify.go:221] Checking for updates...
	I1212 20:10:31.099705  289770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:31.101204  289770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:10:31.103294  289770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:10:31.104421  289770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:10:31.105708  289770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:31.107355  289770 config.go:182] Loaded profile config "kubernetes-upgrade-991615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:31.107500  289770 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:31.107606  289770 config.go:182] Loaded profile config "old-k8s-version-824670": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 20:10:31.107711  289770 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:31.132806  289770 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:10:31.132895  289770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:31.191322  289770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:10:31.180809814 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:31.191449  289770 docker.go:319] overlay module found
	I1212 20:10:31.192867  289770 out.go:179] * Using the docker driver based on user configuration
	I1212 20:10:31.193769  289770 start.go:309] selected driver: docker
	I1212 20:10:31.193781  289770 start.go:927] validating driver "docker" against <nil>
	I1212 20:10:31.193793  289770 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:31.194404  289770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:31.251873  289770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:10:31.242457065 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:31.252077  289770 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:10:31.252367  289770 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:10:31.254044  289770 out.go:179] * Using Docker driver with root privileges
	I1212 20:10:31.255130  289770 cni.go:84] Creating CNI manager for ""
	I1212 20:10:31.255218  289770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:31.255232  289770 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:10:31.255332  289770 start.go:353] cluster config:
	{Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:31.256590  289770 out.go:179] * Starting "default-k8s-diff-port-433034" primary control-plane node in "default-k8s-diff-port-433034" cluster
	I1212 20:10:31.257734  289770 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:10:31.258833  289770 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:10:31.259802  289770 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:31.259832  289770 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:10:31.259840  289770 cache.go:65] Caching tarball of preloaded images
	I1212 20:10:31.259908  289770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:10:31.259936  289770 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:10:31.259947  289770 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:10:31.260068  289770 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/config.json ...
	I1212 20:10:31.260094  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/config.json: {Name:mk2c21e68b4efac900e806b240e66ee91e145ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:31.282021  289770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:10:31.282040  289770 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:10:31.282057  289770 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:10:31.282099  289770 start.go:360] acquireMachinesLock for default-k8s-diff-port-433034: {Name:mke664e0cef6403e9169218e4c6b7e74b7d0b1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:10:31.282209  289770 start.go:364] duration metric: took 89.035µs to acquireMachinesLock for "default-k8s-diff-port-433034"
	I1212 20:10:31.282240  289770 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:31.282349  289770 start.go:125] createHost starting for "" (driver="docker")
	W1212 20:10:31.281973  281225 pod_ready.go:104] pod "coredns-7d764666f9-pbqw6" is not "Ready", error: <nil>
	I1212 20:10:31.781373  281225 pod_ready.go:94] pod "coredns-7d764666f9-pbqw6" is "Ready"
	I1212 20:10:31.781404  281225 pod_ready.go:86] duration metric: took 36.505782081s for pod "coredns-7d764666f9-pbqw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.783899  281225 pod_ready.go:83] waiting for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.788121  281225 pod_ready.go:94] pod "etcd-no-preload-753103" is "Ready"
	I1212 20:10:31.788165  281225 pod_ready.go:86] duration metric: took 4.243189ms for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.790258  281225 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.794394  281225 pod_ready.go:94] pod "kube-apiserver-no-preload-753103" is "Ready"
	I1212 20:10:31.794415  281225 pod_ready.go:86] duration metric: took 4.116838ms for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.796266  281225 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.980941  281225 pod_ready.go:94] pod "kube-controller-manager-no-preload-753103" is "Ready"
	I1212 20:10:31.980968  281225 pod_ready.go:86] duration metric: took 184.654563ms for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:32.180572  281225 pod_ready.go:83] waiting for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:32.579774  281225 pod_ready.go:94] pod "kube-proxy-xn425" is "Ready"
	I1212 20:10:32.579798  281225 pod_ready.go:86] duration metric: took 399.015848ms for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:32.781895  281225 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:33.179177  281225 pod_ready.go:94] pod "kube-scheduler-no-preload-753103" is "Ready"
	I1212 20:10:33.179205  281225 pod_ready.go:86] duration metric: took 397.281172ms for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:33.179218  281225 pod_ready.go:40] duration metric: took 37.906240265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:10:33.226381  281225 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 20:10:33.231196  281225 out.go:179] * Done! kubectl is now configured to use "no-preload-753103" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 20:10:03 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:03.130016878Z" level=info msg="Started container" PID=1740 containerID=ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper id=26910298-1bd6-4052-bdc1-611f74d1b2fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=faf9c19dd039b2d54b154af0c4134eea965eb191717a655a8e06b6a8444af64c
	Dec 12 20:10:04 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:04.082688548Z" level=info msg="Removing container: 2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93" id=4bb008d1-b4c6-460a-ba15-977974117b4c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:04 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:04.096835685Z" level=info msg="Removed container 2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=4bb008d1-b4c6-460a-ba15-977974117b4c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.106110221Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13966bb1-0a08-4777-bee2-5ec74f5e4959 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.106955903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e8362b3f-cc65-42fc-b824-6b562bdc3028 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.107849596Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bc28dcb2-3695-4859-a6c9-93766ad5a985 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.107984404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112114053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112304116Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/21fcf637adacbadae04bd7d52de28d9253864fa1364fe6be3915c6f716cee54e/merged/etc/passwd: no such file or directory"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112340304Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/21fcf637adacbadae04bd7d52de28d9253864fa1364fe6be3915c6f716cee54e/merged/etc/group: no such file or directory"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112592562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.137573134Z" level=info msg="Created container d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd: kube-system/storage-provisioner/storage-provisioner" id=bc28dcb2-3695-4859-a6c9-93766ad5a985 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.138071637Z" level=info msg="Starting container: d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd" id=e625601e-e604-4b6b-8501-3dfb2ac0675d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.139783271Z" level=info msg="Started container" PID=1758 containerID=d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd description=kube-system/storage-provisioner/storage-provisioner id=e625601e-e604-4b6b-8501-3dfb2ac0675d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8dd4bd5ac9d48ec6c4056c64283b06d745457f1702547c25126e2f9e5730328e
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.002748405Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d49aaac-dd14-4bfb-942a-adada94382b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.003657357Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=85c815e3-57f9-4348-a70a-0ab9b198edbb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.004571024Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=a6e722d4-8c5a-4781-aa0c-99c9afc886e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.004682342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.011420333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.011869857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.04448839Z" level=info msg="Created container 383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=a6e722d4-8c5a-4781-aa0c-99c9afc886e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.044925537Z" level=info msg="Starting container: 383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d" id=4609af78-20a8-4dba-93d2-278d85f8dcc0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.046835912Z" level=info msg="Started container" PID=1795 containerID=383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper id=4609af78-20a8-4dba-93d2-278d85f8dcc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=faf9c19dd039b2d54b154af0c4134eea965eb191717a655a8e06b6a8444af64c
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.12360397Z" level=info msg="Removing container: ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6" id=dc810208-ccda-4a81-a3b3-47b63825e644 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.1320909Z" level=info msg="Removed container ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=dc810208-ccda-4a81-a3b3-47b63825e644 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	383f605274c47       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   faf9c19dd039b       dashboard-metrics-scraper-5f989dc9cf-lxjjc       kubernetes-dashboard
	d1429ec971692       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   8dd4bd5ac9d48       storage-provisioner                              kube-system
	bf67adb0b535c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   34 seconds ago      Running             kubernetes-dashboard        0                   bee0c99eecce1       kubernetes-dashboard-8694d4445c-8xmbb            kubernetes-dashboard
	5de41c6aa492f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   86af7d39f8a88       coredns-5dd5756b68-shgbw                         kube-system
	c53a92a4cffb4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   0744b65fc0764       busybox                                          default
	2b7d114503f1e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   0fa0cfb1fd5c0       kindnet-75qr9                                    kube-system
	791996c96c157       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   93c8b346fbedd       kube-proxy-nwrgl                                 kube-system
	74f2f8fee475f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   8dd4bd5ac9d48       storage-provisioner                              kube-system
	9a180d91c2d49       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   7bc59168f21f1       kube-controller-manager-old-k8s-version-824670   kube-system
	849ed107c3cf4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   3ff6a54bd9feb       kube-scheduler-old-k8s-version-824670            kube-system
	6eac65576e4bb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   8ea6b12444176       etcd-old-k8s-version-824670                      kube-system
	956a9b5c70fa4       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   627729fb077e5       kube-apiserver-old-k8s-version-824670            kube-system
	
	
	==> coredns [5de41c6aa492fe5e76f33d9b2461f6010074bf5e9cecb2e7aaa01d47eff17b90] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43558 - 57662 "HINFO IN 4623689995966292170.478716781200727525. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.066209008s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-824670
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-824670
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=old-k8s-version-824670
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_08_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:08:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-824670
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:10:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:09:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-824670
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                1fb8fe54-c4b9-4491-b301-c9b4220778ba
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-shgbw                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-old-k8s-version-824670                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-75qr9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-824670             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-824670    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-nwrgl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-824670             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lxjjc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8xmbb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-824670 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node old-k8s-version-824670 event: Registered Node old-k8s-version-824670 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-824670 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node old-k8s-version-824670 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-824670 event: Registered Node old-k8s-version-824670 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [6eac65576e4bbc186c5b79d6f0aa97f5d5234fb637be74a5ce5b44491b28bb54] <==
	{"level":"info","ts":"2025-12-12T20:09:41.548857Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-12T20:09:41.548929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-12T20:09:41.549013Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-12T20:09:41.549134Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:09:41.549176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:09:41.551131Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-12T20:09:41.551207Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-12T20:09:41.55127Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-12T20:09:41.551525Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-12T20:09:41.551577Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-12T20:09:42.640383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-12T20:09:42.640445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-12T20:09:42.640461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-12T20:09:42.640475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.64048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.640488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.640494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.641501Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-824670 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-12T20:09:42.641515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T20:09:42.641538Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T20:09:42.641693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-12T20:09:42.641717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-12T20:09:42.642725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-12T20:09:42.642764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-12T20:10:34.723687Z","caller":"traceutil/trace.go:171","msg":"trace[1418872379] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"149.717092ms","start":"2025-12-12T20:10:34.573942Z","end":"2025-12-12T20:10:34.723659Z","steps":["trace[1418872379] 'process raft request'  (duration: 149.584404ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:10:35 up 53 min,  0 user,  load average: 2.85, 1.92, 1.46
	Linux old-k8s-version-824670 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b7d114503f1e7c1f729cb4a9c42a05780b95035046d4e4ef6f086068d58d276] <==
	I1212 20:09:44.554001       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:09:44.577311       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 20:09:44.577438       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:09:44.577456       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:09:44.577497       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:09:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:09:44.878786       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:09:44.949263       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:09:44.949331       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:09:44.949530       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:09:45.177512       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:09:45.177544       1 metrics.go:72] Registering metrics
	I1212 20:09:45.177603       1 controller.go:711] "Syncing nftables rules"
	I1212 20:09:54.878515       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:09:54.878561       1 main.go:301] handling current node
	I1212 20:10:04.878538       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:04.878594       1 main.go:301] handling current node
	I1212 20:10:14.878518       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:14.878569       1 main.go:301] handling current node
	I1212 20:10:24.880407       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:24.880448       1 main.go:301] handling current node
	I1212 20:10:34.883298       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:34.883340       1 main.go:301] handling current node
	
	
	==> kube-apiserver [956a9b5c70fa4a99806a77c8333552c74d1d682dc9e545864ecd4d6ae67331a9] <==
	I1212 20:09:43.537152       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1212 20:09:43.558646       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:09:43.602088       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 20:09:43.608900       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 20:09:43.608920       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 20:09:43.609217       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 20:09:43.609257       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 20:09:43.609293       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:09:43.609430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:09:43.637983       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 20:09:43.638025       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:09:43.638034       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:09:43.638041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:09:43.638049       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:09:44.454883       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 20:09:44.482837       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:09:44.498842       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:09:44.505737       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:09:44.512060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:09:44.513180       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:09:44.549345       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.147.212"}
	I1212 20:09:44.566441       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.185.64"}
	I1212 20:09:56.133785       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 20:09:56.179921       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:09:56.186818       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9a180d91c2d49bf246e2537f6f6ec9383636af5bcd8e483965280f5e2ed16670] <==
	I1212 20:09:56.193058       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 20:09:56.199483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.981849ms"
	I1212 20:09:56.199567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.351µs"
	I1212 20:09:56.200991       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8xmbb"
	I1212 20:09:56.201017       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lxjjc"
	I1212 20:09:56.205872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.353848ms"
	I1212 20:09:56.209376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="21.77357ms"
	I1212 20:09:56.211268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.353611ms"
	I1212 20:09:56.211367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.156µs"
	I1212 20:09:56.213842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="4.4255ms"
	I1212 20:09:56.213928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.883µs"
	I1212 20:09:56.216026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.343µs"
	I1212 20:09:56.224597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.78µs"
	I1212 20:09:56.506602       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:09:56.583971       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:09:56.583999       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:10:01.097503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.621803ms"
	I1212 20:10:01.097693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.848µs"
	I1212 20:10:03.089460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.162µs"
	I1212 20:10:04.095464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.372µs"
	I1212 20:10:05.093947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.445µs"
	I1212 20:10:18.041749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.227449ms"
	I1212 20:10:18.041886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.452µs"
	I1212 20:10:21.133883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.187µs"
	I1212 20:10:26.519641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.978µs"
	
	
	==> kube-proxy [791996c96c1570a58c5ea4f6aab56589666965c63b05afd3cf932d0e002d46bf] <==
	I1212 20:09:44.409382       1 server_others.go:69] "Using iptables proxy"
	I1212 20:09:44.420252       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1212 20:09:44.442025       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:09:44.444626       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:09:44.444658       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 20:09:44.444668       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 20:09:44.444702       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:09:44.444941       1 server.go:846] "Version info" version="v1.28.0"
	I1212 20:09:44.444954       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:44.445734       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:09:44.445773       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:09:44.445807       1 config.go:188] "Starting service config controller"
	I1212 20:09:44.445812       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:09:44.446030       1 config.go:315] "Starting node config controller"
	I1212 20:09:44.446049       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:09:44.547542       1 shared_informer.go:318] Caches are synced for node config
	I1212 20:09:44.547551       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:09:44.547587       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [849ed107c3cf4dafcf63a6f35cbf26763c3ee90be82e68800ff7025351783d38] <==
	I1212 20:09:42.085417       1 serving.go:348] Generated self-signed cert in-memory
	W1212 20:09:43.535721       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:09:43.535828       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:09:43.535844       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:09:43.535854       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:09:43.563197       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1212 20:09:43.566326       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:43.569413       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:09:43.569474       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:09:43.572364       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1212 20:09:43.572433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 20:09:43.669988       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.208981     736 topology_manager.go:215] "Topology Admit Handler" podUID="85485575-d55f-4968-9740-35c3df94662b" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-8xmbb"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303727     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6jtr\" (UniqueName: \"kubernetes.io/projected/e0d619f1-f49a-4034-a2f4-51b1cdcaae11-kube-api-access-k6jtr\") pod \"dashboard-metrics-scraper-5f989dc9cf-lxjjc\" (UID: \"e0d619f1-f49a-4034-a2f4-51b1cdcaae11\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303772     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85485575-d55f-4968-9740-35c3df94662b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8xmbb\" (UID: \"85485575-d55f-4968-9740-35c3df94662b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8xmbb"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303798     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5d2g\" (UniqueName: \"kubernetes.io/projected/85485575-d55f-4968-9740-35c3df94662b-kube-api-access-d5d2g\") pod \"kubernetes-dashboard-8694d4445c-8xmbb\" (UID: \"85485575-d55f-4968-9740-35c3df94662b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8xmbb"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303887     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e0d619f1-f49a-4034-a2f4-51b1cdcaae11-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lxjjc\" (UID: \"e0d619f1-f49a-4034-a2f4-51b1cdcaae11\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc"
	Dec 12 20:10:01 old-k8s-version-824670 kubelet[736]: I1212 20:10:01.083376     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8xmbb" podStartSLOduration=0.965248545 podCreationTimestamp="2025-12-12 20:09:56 +0000 UTC" firstStartedPulling="2025-12-12 20:09:56.531576443 +0000 UTC m=+15.612903510" lastFinishedPulling="2025-12-12 20:10:00.649640146 +0000 UTC m=+19.730967217" observedRunningTime="2025-12-12 20:10:01.082124109 +0000 UTC m=+20.163451180" watchObservedRunningTime="2025-12-12 20:10:01.083312252 +0000 UTC m=+20.164639326"
	Dec 12 20:10:03 old-k8s-version-824670 kubelet[736]: I1212 20:10:03.076295     736 scope.go:117] "RemoveContainer" containerID="2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93"
	Dec 12 20:10:04 old-k8s-version-824670 kubelet[736]: I1212 20:10:04.081180     736 scope.go:117] "RemoveContainer" containerID="2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93"
	Dec 12 20:10:04 old-k8s-version-824670 kubelet[736]: I1212 20:10:04.081422     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:04 old-k8s-version-824670 kubelet[736]: E1212 20:10:04.081799     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:05 old-k8s-version-824670 kubelet[736]: I1212 20:10:05.084950     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:05 old-k8s-version-824670 kubelet[736]: E1212 20:10:05.085293     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:06 old-k8s-version-824670 kubelet[736]: I1212 20:10:06.509216     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:06 old-k8s-version-824670 kubelet[736]: E1212 20:10:06.509528     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:15 old-k8s-version-824670 kubelet[736]: I1212 20:10:15.105737     736 scope.go:117] "RemoveContainer" containerID="74f2f8fee475f6c8156e8874b05736c6e859ff4488ac0eef026e65fab8b4755e"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: I1212 20:10:21.002180     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: I1212 20:10:21.122520     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: I1212 20:10:21.122770     736 scope.go:117] "RemoveContainer" containerID="383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: E1212 20:10:21.123139     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:26 old-k8s-version-824670 kubelet[736]: I1212 20:10:26.509034     736 scope.go:117] "RemoveContainer" containerID="383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d"
	Dec 12 20:10:26 old-k8s-version-824670 kubelet[736]: E1212 20:10:26.509456     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: kubelet.service: Consumed 1.417s CPU time.
	
	
	==> kubernetes-dashboard [bf67adb0b535c4f80332ee7cbb048fb485b60b71283dd19817a84c6c40a9acf6] <==
	2025/12/12 20:10:00 Using namespace: kubernetes-dashboard
	2025/12/12 20:10:00 Using in-cluster config to connect to apiserver
	2025/12/12 20:10:00 Using secret token for csrf signing
	2025/12/12 20:10:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:10:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:10:00 Successful initial request to the apiserver, version: v1.28.0
	2025/12/12 20:10:00 Generating JWE encryption key
	2025/12/12 20:10:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:10:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:10:01 Initializing JWE encryption key from synchronized object
	2025/12/12 20:10:01 Creating in-cluster Sidecar client
	2025/12/12 20:10:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:10:01 Serving insecurely on HTTP port: 9090
	2025/12/12 20:10:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:10:00 Starting overwatch
	
	
	==> storage-provisioner [74f2f8fee475f6c8156e8874b05736c6e859ff4488ac0eef026e65fab8b4755e] <==
	I1212 20:09:44.384386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:10:14.387707       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd] <==
	I1212 20:10:15.151518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:10:15.158699       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:10:15.158730       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 20:10:32.557683       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:10:32.557877       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-824670_8ad85e3b-a8c0-4324-8ac8-d350da61f618!
	I1212 20:10:32.557878       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f695fbd6-0ef5-496c-8640-6e2ff454cd84", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-824670_8ad85e3b-a8c0-4324-8ac8-d350da61f618 became leader
	I1212 20:10:32.658032       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-824670_8ad85e3b-a8c0-4324-8ac8-d350da61f618!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-824670 -n old-k8s-version-824670
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-824670 -n old-k8s-version-824670: exit status 2 (339.5997ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-824670 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-824670
helpers_test.go:244: (dbg) docker inspect old-k8s-version-824670:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f",
	        "Created": "2025-12-12T20:08:24.734370557Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:09:35.041301945Z",
	            "FinishedAt": "2025-12-12T20:09:34.234827669Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/hosts",
	        "LogPath": "/var/lib/docker/containers/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f/5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f-json.log",
	        "Name": "/old-k8s-version-824670",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-824670:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-824670",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5ab927c640d07c092293740e22de0c0f3921727ef65e3c1cdd16508a3906f60f",
	                "LowerDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30c86b0c2116c0f48f8210ea61a7592baf17dd790ff6789c3f29325d1db1d409/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-824670",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-824670/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-824670",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-824670",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-824670",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c1743c2c5c4a69caa9bb214fc5176e0ae400b8e95bfca84d34e9165abc31fb47",
	            "SandboxKey": "/var/run/docker/netns/c1743c2c5c4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-824670": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "54eba6dc9ad901e89d943167287789d7ba6943774fa37cc0a202f7a86e0bfc9a",
	                    "EndpointID": "bc472e2797f41014170966e3081b872df44a837f122a80f63a7242dff4a1c896",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "62:80:cf:65:55:59",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-824670",
	                        "5ab927c640d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670: exit status 2 (362.363366ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-824670 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-824670 logs -n 25: (1.096821471s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p pause-243084                                                                                                                                                                                                                               │ pause-243084                 │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:05 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ stopped-upgrade-180826       │ jenkins │ v1.35.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                  │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:05 UTC │ 12 Dec 25 20:06 UTC │
	│ stop    │ stopped-upgrade-180826 stop                                                                                                                                                                                                                   │ stopped-upgrade-180826       │ jenkins │ v1.35.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:06 UTC │
	│ start   │ -p stopped-upgrade-180826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:06 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p running-upgrade-569692                                                                                                                                                                                                                     │ running-upgrade-569692       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ delete  │ -p cert-expiration-070436                                                                                                                                                                                                                     │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p old-k8s-version-824670 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p no-preload-753103 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-824670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ addons  │ enable dashboard -p no-preload-753103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                     │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                               │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:31.092369  289770 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:31.092499  289770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:31.092510  289770 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:31.092517  289770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:31.092742  289770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:10:31.093222  289770 out.go:368] Setting JSON to false
	I1212 20:10:31.094372  289770 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3178,"bootTime":1765567053,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:10:31.094425  289770 start.go:143] virtualization: kvm guest
	I1212 20:10:31.097386  289770 out.go:179] * [default-k8s-diff-port-433034] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:10:31.098577  289770 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:31.098566  289770 notify.go:221] Checking for updates...
	I1212 20:10:31.099705  289770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:31.101204  289770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:10:31.103294  289770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:10:31.104421  289770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:10:31.105708  289770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:31.107355  289770 config.go:182] Loaded profile config "kubernetes-upgrade-991615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:31.107500  289770 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:31.107606  289770 config.go:182] Loaded profile config "old-k8s-version-824670": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 20:10:31.107711  289770 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:31.132806  289770 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:10:31.132895  289770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:31.191322  289770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:10:31.180809814 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:31.191449  289770 docker.go:319] overlay module found
	I1212 20:10:31.192867  289770 out.go:179] * Using the docker driver based on user configuration
	I1212 20:10:31.193769  289770 start.go:309] selected driver: docker
	I1212 20:10:31.193781  289770 start.go:927] validating driver "docker" against <nil>
	I1212 20:10:31.193793  289770 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:31.194404  289770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:31.251873  289770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:10:31.242457065 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:31.252077  289770 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:10:31.252367  289770 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:10:31.254044  289770 out.go:179] * Using Docker driver with root privileges
	I1212 20:10:31.255130  289770 cni.go:84] Creating CNI manager for ""
	I1212 20:10:31.255218  289770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:31.255232  289770 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:10:31.255332  289770 start.go:353] cluster config:
	{Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:31.256590  289770 out.go:179] * Starting "default-k8s-diff-port-433034" primary control-plane node in "default-k8s-diff-port-433034" cluster
	I1212 20:10:31.257734  289770 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:10:31.258833  289770 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:10:31.259802  289770 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:31.259832  289770 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:10:31.259840  289770 cache.go:65] Caching tarball of preloaded images
	I1212 20:10:31.259908  289770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:10:31.259936  289770 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:10:31.259947  289770 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:10:31.260068  289770 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/config.json ...
	I1212 20:10:31.260094  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/config.json: {Name:mk2c21e68b4efac900e806b240e66ee91e145ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:31.282021  289770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:10:31.282040  289770 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:10:31.282057  289770 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:10:31.282099  289770 start.go:360] acquireMachinesLock for default-k8s-diff-port-433034: {Name:mke664e0cef6403e9169218e4c6b7e74b7d0b1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:10:31.282209  289770 start.go:364] duration metric: took 89.035µs to acquireMachinesLock for "default-k8s-diff-port-433034"
	I1212 20:10:31.282240  289770 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:31.282349  289770 start.go:125] createHost starting for "" (driver="docker")
	W1212 20:10:31.281973  281225 pod_ready.go:104] pod "coredns-7d764666f9-pbqw6" is not "Ready", error: <nil>
	I1212 20:10:31.781373  281225 pod_ready.go:94] pod "coredns-7d764666f9-pbqw6" is "Ready"
	I1212 20:10:31.781404  281225 pod_ready.go:86] duration metric: took 36.505782081s for pod "coredns-7d764666f9-pbqw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.783899  281225 pod_ready.go:83] waiting for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.788121  281225 pod_ready.go:94] pod "etcd-no-preload-753103" is "Ready"
	I1212 20:10:31.788165  281225 pod_ready.go:86] duration metric: took 4.243189ms for pod "etcd-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.790258  281225 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.794394  281225 pod_ready.go:94] pod "kube-apiserver-no-preload-753103" is "Ready"
	I1212 20:10:31.794415  281225 pod_ready.go:86] duration metric: took 4.116838ms for pod "kube-apiserver-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.796266  281225 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:31.980941  281225 pod_ready.go:94] pod "kube-controller-manager-no-preload-753103" is "Ready"
	I1212 20:10:31.980968  281225 pod_ready.go:86] duration metric: took 184.654563ms for pod "kube-controller-manager-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:32.180572  281225 pod_ready.go:83] waiting for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:32.579774  281225 pod_ready.go:94] pod "kube-proxy-xn425" is "Ready"
	I1212 20:10:32.579798  281225 pod_ready.go:86] duration metric: took 399.015848ms for pod "kube-proxy-xn425" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:32.781895  281225 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:33.179177  281225 pod_ready.go:94] pod "kube-scheduler-no-preload-753103" is "Ready"
	I1212 20:10:33.179205  281225 pod_ready.go:86] duration metric: took 397.281172ms for pod "kube-scheduler-no-preload-753103" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:10:33.179218  281225 pod_ready.go:40] duration metric: took 37.906240265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:10:33.226381  281225 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 20:10:33.231196  281225 out.go:179] * Done! kubectl is now configured to use "no-preload-753103" cluster and "default" namespace by default
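The waits above poll each kube-system pod until its Ready condition is true (or the pod is gone). A minimal client-go sketch of that check, with a hypothetical kubeconfig path and the pod name taken from the log; this is an illustration, not the helper minikube itself uses:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a kube-system pod until its Ready condition is true
// or the timeout expires. In this sketch any API error (including NotFound)
// simply ends the wait.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %q not Ready before timeout", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(context.Background(), cs, "coredns-7d764666f9-pbqw6", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}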
	I1212 20:10:30.554571  289388 out.go:252] * Updating the running docker "kubernetes-upgrade-991615" container ...
	I1212 20:10:30.554615  289388 machine.go:94] provisionDockerMachine start ...
	I1212 20:10:30.554685  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:30.572579  289388 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:30.572867  289388 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1212 20:10:30.572881  289388 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:10:30.705485  289388 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-991615
	
	I1212 20:10:30.705519  289388 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-991615"
	I1212 20:10:30.705590  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:30.724906  289388 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:30.725112  289388 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1212 20:10:30.725129  289388 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-991615 && echo "kubernetes-upgrade-991615" | sudo tee /etc/hostname
	I1212 20:10:30.867808  289388 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-991615
	
	I1212 20:10:30.867880  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:30.886535  289388 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:30.886775  289388 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1212 20:10:30.886795  289388 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-991615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-991615/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-991615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:10:31.021960  289388 main.go:143] libmachine: SSH cmd err, output: <nil>: 
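Every provisioning step in this log is a shell command run over SSH against the container's forwarded port (33054 here). A self-contained sketch of running one such command with golang.org/x/crypto/ssh; the key path and hostname are hypothetical, and the real provisioner does more error handling than this:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/user/.minikube/machines/example/id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container, not for real hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33054", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same style of command the provisioner runs: set the hostname inside the machine.
	out, err := session.CombinedOutput(`sudo hostname example && echo example | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}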
	I1212 20:10:31.021986  289388 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:10:31.022015  289388 ubuntu.go:190] setting up certificates
	I1212 20:10:31.022038  289388 provision.go:84] configureAuth start
	I1212 20:10:31.022102  289388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-991615
	I1212 20:10:31.042776  289388 provision.go:143] copyHostCerts
	I1212 20:10:31.042850  289388 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:10:31.042871  289388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:10:31.042950  289388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:10:31.043114  289388 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:10:31.043129  289388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:10:31.043181  289388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:10:31.043294  289388 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:10:31.043305  289388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:10:31.043343  289388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:10:31.043417  289388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-991615 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-991615 localhost minikube]
	I1212 20:10:31.068478  289388 provision.go:177] copyRemoteCerts
	I1212 20:10:31.068545  289388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:10:31.068593  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:31.088900  289388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/kubernetes-upgrade-991615/id_rsa Username:docker}
	I1212 20:10:31.188736  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:10:31.206971  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:10:31.226664  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 20:10:31.245954  289388 provision.go:87] duration metric: took 223.890884ms to configureAuth
	I1212 20:10:31.245984  289388 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:10:31.246148  289388 config.go:182] Loaded profile config "kubernetes-upgrade-991615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:31.246266  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:31.266064  289388 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:31.266378  289388 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1212 20:10:31.266402  289388 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:10:31.855932  289388 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:10:31.855961  289388 machine.go:97] duration metric: took 1.301335313s to provisionDockerMachine
	I1212 20:10:31.855978  289388 start.go:293] postStartSetup for "kubernetes-upgrade-991615" (driver="docker")
	I1212 20:10:31.855994  289388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:10:31.856063  289388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:10:31.856138  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:31.877769  289388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/kubernetes-upgrade-991615/id_rsa Username:docker}
	I1212 20:10:31.978246  289388 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:10:31.982576  289388 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:10:31.982603  289388 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:10:31.982615  289388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:10:31.982661  289388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:10:31.982760  289388 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:10:31.982894  289388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:10:31.990731  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:10:32.012415  289388 start.go:296] duration metric: took 156.4219ms for postStartSetup
	I1212 20:10:32.012492  289388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:10:32.012546  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:32.036429  289388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/kubernetes-upgrade-991615/id_rsa Username:docker}
	I1212 20:10:32.140478  289388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:10:32.145360  289388 fix.go:56] duration metric: took 1.610554046s for fixHost
	I1212 20:10:32.145386  289388 start.go:83] releasing machines lock for "kubernetes-upgrade-991615", held for 1.610603959s
	I1212 20:10:32.145460  289388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-991615
	I1212 20:10:32.165940  289388 ssh_runner.go:195] Run: cat /version.json
	I1212 20:10:32.165991  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:32.166054  289388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:10:32.166155  289388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-991615
	I1212 20:10:32.186262  289388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/kubernetes-upgrade-991615/id_rsa Username:docker}
	I1212 20:10:32.186636  289388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/kubernetes-upgrade-991615/id_rsa Username:docker}
	I1212 20:10:32.279535  289388 ssh_runner.go:195] Run: systemctl --version
	I1212 20:10:32.364435  289388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:10:32.412301  289388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:10:32.417820  289388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:10:32.417898  289388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:10:32.427698  289388 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:10:32.427715  289388 start.go:496] detecting cgroup driver to use...
	I1212 20:10:32.427742  289388 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:10:32.427781  289388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:10:32.445634  289388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:10:32.459382  289388 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:10:32.459446  289388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:10:32.475690  289388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:10:32.490874  289388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:10:32.616384  289388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:10:32.729232  289388 docker.go:234] disabling docker service ...
	I1212 20:10:32.729362  289388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:10:32.745207  289388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:10:32.759450  289388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:10:32.884102  289388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:10:32.992590  289388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:10:33.005602  289388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:10:33.020233  289388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:10:33.020330  289388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:33.029119  289388 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:10:33.029177  289388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:33.037867  289388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:33.046920  289388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:33.055451  289388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:10:33.064437  289388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:33.074459  289388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:33.083822  289388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:33.093440  289388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:10:33.102629  289388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:10:33.111457  289388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:33.253141  289388 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:10:35.363722  289388 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.110544231s)
	I1212 20:10:35.363752  289388 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:10:35.363805  289388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:10:35.367997  289388 start.go:564] Will wait 60s for crictl version
	I1212 20:10:35.368082  289388 ssh_runner.go:195] Run: which crictl
	I1212 20:10:35.372015  289388 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:10:35.400949  289388 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:10:35.401043  289388 ssh_runner.go:195] Run: crio --version
	I1212 20:10:35.438771  289388 ssh_runner.go:195] Run: crio --version
	I1212 20:10:35.475757  289388 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
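The CRI-O preparation above rewrites keys such as pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed and then restarts the service. A rough Go equivalent of one of those in-place edits, standard library only; the helper name is illustrative, not minikube's own:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey replaces a `key = ...` line in a CRI-O drop-in config,
// mirroring the sed commands shown in the log.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)^.*" + regexp.QuoteMeta(key) + " = .*$")
	out := re.ReplaceAll(data, []byte(key+" = \""+value+"\""))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfKey(conf, "cgroup_manager", "systemd"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// As in the log, a privileged `systemctl restart crio` is still needed afterwards.
}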
	I1212 20:10:31.283931  289770 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:10:31.284161  289770 start.go:159] libmachine.API.Create for "default-k8s-diff-port-433034" (driver="docker")
	I1212 20:10:31.284194  289770 client.go:173] LocalClient.Create starting
	I1212 20:10:31.284269  289770 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:10:31.284344  289770 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:31.284367  289770 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:31.284430  289770 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:10:31.284457  289770 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:31.284477  289770 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:31.284819  289770 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-433034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:31.300874  289770 cli_runner.go:211] docker network inspect default-k8s-diff-port-433034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:31.300921  289770 network_create.go:284] running [docker network inspect default-k8s-diff-port-433034] to gather additional debugging logs...
	I1212 20:10:31.300936  289770 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-433034
	W1212 20:10:31.319451  289770 cli_runner.go:211] docker network inspect default-k8s-diff-port-433034 returned with exit code 1
	I1212 20:10:31.319489  289770 network_create.go:287] error running [docker network inspect default-k8s-diff-port-433034]: docker network inspect default-k8s-diff-port-433034: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-433034 not found
	I1212 20:10:31.319507  289770 network_create.go:289] output of [docker network inspect default-k8s-diff-port-433034]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-433034 not found
	
	** /stderr **
	I1212 20:10:31.319657  289770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:31.339641  289770 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:10:31.340965  289770 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:10:31.342179  289770 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:10:31.343066  289770 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-09b123768b60 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:6c:50:8a:dd:de} reservation:<nil>}
	I1212 20:10:31.343981  289770 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c5f00e9d4498 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:36:57:36:b3:ba:39} reservation:<nil>}
	I1212 20:10:31.344741  289770 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-54eba6dc9ad9 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:5a:a5:29:47:57} reservation:<nil>}
	I1212 20:10:31.345623  289770 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e8be60}
	I1212 20:10:31.345691  289770 network_create.go:124] attempt to create docker network default-k8s-diff-port-433034 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1212 20:10:31.345752  289770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-433034 default-k8s-diff-port-433034
	I1212 20:10:31.398679  289770 network_create.go:108] docker network default-k8s-diff-port-433034 192.168.103.0/24 created
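The network setup above skips each /24 already claimed by an existing bridge and settles on 192.168.103.0/24. A stdlib-only sketch of that scan, assuming the taken subnets are already known (hard-coded here from the log) and that candidates advance by 9 in the third octet, as the log's progression shows:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnets already used by other minikube networks (taken from the log above).
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	// Candidate third octets advance by 9, matching the skipped subnets in the log.
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if _, _, err := net.ParseCIDR(cidr); err != nil {
			break // defensive: stop on an invalid candidate
		}
		if !taken[cidr] {
			fmt.Println("using free private subnet", cidr) // prints 192.168.103.0/24 for this input
			break
		}
	}
}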
	I1212 20:10:31.398711  289770 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-433034" container
	I1212 20:10:31.398775  289770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:31.417817  289770 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-433034 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-433034 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:31.436947  289770 oci.go:103] Successfully created a docker volume default-k8s-diff-port-433034
	I1212 20:10:31.437045  289770 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-433034-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-433034 --entrypoint /usr/bin/test -v default-k8s-diff-port-433034:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:31.907082  289770 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-433034
	I1212 20:10:31.907159  289770 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:31.907173  289770 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:31.907251  289770 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-433034:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:10:35.203687  289770 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-433034:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (3.296367101s)
	I1212 20:10:35.203722  289770 kic.go:203] duration metric: took 3.296545269s to extract preloaded images to volume ...
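The preload step extracts a .tar.lz4 image cache into the docker volume via tar -I lz4 inside a helper container. The same extraction run locally against a plain directory, as an illustrative sketch with hypothetical paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/home/user/.minikube/cache/preloaded-tarball/preloaded-images.tar.lz4" // hypothetical path
	dest := "/tmp/extractDir"

	if err := os.MkdirAll(dest, 0o755); err != nil {
		panic(err)
	}
	// Same flags the log shows: decompress with lz4 and extract into the target dir.
	cmd := exec.Command("tar", "-I", "lz4", "-xf", tarball, "-C", dest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}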
	W1212 20:10:35.203791  289770 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:10:35.203818  289770 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:10:35.203851  289770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:10:35.270383  289770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-433034 --name default-k8s-diff-port-433034 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-433034 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-433034 --network default-k8s-diff-port-433034 --ip 192.168.103.2 --volume default-k8s-diff-port-433034:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:10:35.586977  289770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Running}}
	I1212 20:10:35.609189  289770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Status}}
	I1212 20:10:35.627783  289770 cli_runner.go:164] Run: docker exec default-k8s-diff-port-433034 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:10:35.679878  289770 oci.go:144] the created container "default-k8s-diff-port-433034" has a running status.
	I1212 20:10:35.679917  289770 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/default-k8s-diff-port-433034/id_rsa...
	I1212 20:10:35.788670  289770 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/default-k8s-diff-port-433034/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:10:35.818545  289770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Status}}
	I1212 20:10:35.844312  289770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:10:35.844335  289770 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-433034 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:10:35.890484  289770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Status}}
	I1212 20:10:35.923498  289770 machine.go:94] provisionDockerMachine start ...
	I1212 20:10:35.923636  289770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433034
	I1212 20:10:35.951916  289770 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:35.952392  289770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1212 20:10:35.952421  289770 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:10:35.953082  289770 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55190->127.0.0.1:33079: read: connection reset by peer
	I1212 20:10:35.477071  289388 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-991615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:35.500450  289388 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 20:10:35.504891  289388 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-991615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-991615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:10:35.504998  289388 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:10:35.505053  289388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:35.540862  289388 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:35.540889  289388 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:10:35.540939  289388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:35.568867  289388 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:35.568889  289388 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:10:35.568899  289388 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 20:10:35.569008  289388 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-991615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-991615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:10:35.569126  289388 ssh_runner.go:195] Run: crio config
	I1212 20:10:35.631947  289388 cni.go:84] Creating CNI manager for ""
	I1212 20:10:35.631974  289388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:35.631994  289388 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:10:35.632037  289388 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-991615 NodeName:kubernetes-upgrade-991615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:10:35.632211  289388 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-991615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
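	The generated kubeadm.yaml above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short sketch that decodes such a stream and prints each document's kind; it assumes gopkg.in/yaml.v3 purely for illustration, not that minikube uses that library:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document carries apiVersion and kind, e.g. kubeadm.k8s.io/v1beta4 / ClusterConfiguration.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}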
	
	I1212 20:10:35.632319  289388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:10:35.641819  289388 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:10:35.641892  289388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:10:35.652492  289388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 20:10:35.667516  289388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:10:35.683358  289388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1212 20:10:35.698013  289388 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:10:35.701986  289388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:35.829024  289388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:35.849379  289388 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615 for IP: 192.168.76.2
	I1212 20:10:35.849405  289388 certs.go:195] generating shared ca certs ...
	I1212 20:10:35.849430  289388 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:35.849591  289388 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:10:35.849663  289388 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:10:35.849677  289388 certs.go:257] generating profile certs ...
	I1212 20:10:35.849797  289388 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/client.key
	I1212 20:10:35.849874  289388 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/apiserver.key.ea3e49e7
	I1212 20:10:35.849984  289388 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/proxy-client.key
	I1212 20:10:35.850156  289388 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:10:35.850212  289388 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:10:35.850227  289388 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:10:35.850286  289388 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:10:35.850325  289388 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:10:35.850355  289388 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:10:35.850415  289388 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:10:35.851244  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:10:35.874866  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:10:35.899731  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:10:35.927570  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:10:35.955856  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 20:10:35.973972  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:10:35.997679  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:10:36.020392  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:10:36.039858  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:10:36.059740  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:10:36.078500  289388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:10:36.098420  289388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:10:36.112672  289388 ssh_runner.go:195] Run: openssl version
	I1212 20:10:36.119082  289388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:36.127013  289388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:10:36.135388  289388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:36.140204  289388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:36.140259  289388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:36.190186  289388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:10:36.199485  289388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:10:36.207846  289388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:10:36.216832  289388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:10:36.221027  289388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:10:36.221081  289388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:10:36.265388  289388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:10:36.274121  289388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:10:36.281927  289388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:10:36.289836  289388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:10:36.294123  289388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:10:36.294179  289388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:10:36.331471  289388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:10:36.338701  289388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:10:36.342511  289388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:10:36.377418  289388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:10:36.419361  289388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:10:36.458587  289388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:10:36.499838  289388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:10:36.536006  289388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
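The openssl -checkend 86400 runs above confirm each control-plane certificate remains valid for at least another 24 hours. An equivalent check with Go's crypto/x509; the certificate path is copied from the log, but the program itself is only an illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same predicate as `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24 hours:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}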
	I1212 20:10:36.577454  289388 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-991615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-991615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:36.577548  289388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:10:36.577624  289388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:10:36.609026  289388 cri.go:89] found id: "f017457aed1bb3185029d332fbb3dfb05a5cbfd611f2494bf988c551ac1773f5"
	I1212 20:10:36.609042  289388 cri.go:89] found id: "268ddee65ac17907cd859d8179dd59629c3e3cf161eda8760700e0cefcb5271b"
	I1212 20:10:36.609045  289388 cri.go:89] found id: "d096a1bc2dcd8e85800536e8d3756c84b19dfd55922dacc368311ca2c3da87b8"
	I1212 20:10:36.609049  289388 cri.go:89] found id: "79a759fba67a6435a9c74df16ee791711241e83b0534c3958dba8e354feaa87f"
	I1212 20:10:36.609051  289388 cri.go:89] found id: ""
	I1212 20:10:36.609086  289388 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:10:36.621676  289388 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:36Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:10:36.621743  289388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:10:36.630711  289388 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:10:36.630729  289388 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:10:36.630770  289388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:10:36.638067  289388 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:10:36.638782  289388 kubeconfig.go:125] found "kubernetes-upgrade-991615" server: "https://192.168.76.2:8443"
	I1212 20:10:36.639838  289388 kapi.go:59] client config for kubernetes-upgrade-991615: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/client.key", CAFile:"/home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:10:36.640395  289388 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:10:36.640416  289388 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:10:36.640432  289388 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:10:36.640438  289388 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:10:36.640448  289388 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:10:36.640781  289388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:10:36.649753  289388 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 20:10:36.649783  289388 kubeadm.go:602] duration metric: took 19.04679ms to restartPrimaryControlPlane
	I1212 20:10:36.649793  289388 kubeadm.go:403] duration metric: took 72.348943ms to StartCluster
	I1212 20:10:36.649810  289388 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:36.649866  289388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:10:36.650935  289388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:36.651181  289388 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:36.651258  289388 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:10:36.651372  289388 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-991615"
	I1212 20:10:36.651393  289388 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-991615"
	I1212 20:10:36.651391  289388 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-991615"
	I1212 20:10:36.651424  289388 config.go:182] Loaded profile config "kubernetes-upgrade-991615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:36.651435  289388 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-991615"
	W1212 20:10:36.651402  289388 addons.go:248] addon storage-provisioner should already be in state true
	I1212 20:10:36.651533  289388 host.go:66] Checking if "kubernetes-upgrade-991615" exists ...
	I1212 20:10:36.651710  289388 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-991615 --format={{.State.Status}}
	I1212 20:10:36.652036  289388 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-991615 --format={{.State.Status}}
	I1212 20:10:36.653716  289388 out.go:179] * Verifying Kubernetes components...
	I1212 20:10:36.655003  289388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:36.673361  289388 kapi.go:59] client config for kubernetes-upgrade-991615: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kubernetes-upgrade-991615/client.key", CAFile:"/home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:10:36.673696  289388 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-991615"
	W1212 20:10:36.673711  289388 addons.go:248] addon default-storageclass should already be in state true
	I1212 20:10:36.673736  289388 host.go:66] Checking if "kubernetes-upgrade-991615" exists ...
	I1212 20:10:36.674163  289388 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-991615 --format={{.State.Status}}
	I1212 20:10:36.677711  289388 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Dec 12 20:10:03 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:03.130016878Z" level=info msg="Started container" PID=1740 containerID=ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper id=26910298-1bd6-4052-bdc1-611f74d1b2fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=faf9c19dd039b2d54b154af0c4134eea965eb191717a655a8e06b6a8444af64c
	Dec 12 20:10:04 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:04.082688548Z" level=info msg="Removing container: 2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93" id=4bb008d1-b4c6-460a-ba15-977974117b4c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:04 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:04.096835685Z" level=info msg="Removed container 2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=4bb008d1-b4c6-460a-ba15-977974117b4c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.106110221Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13966bb1-0a08-4777-bee2-5ec74f5e4959 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.106955903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e8362b3f-cc65-42fc-b824-6b562bdc3028 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.107849596Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bc28dcb2-3695-4859-a6c9-93766ad5a985 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.107984404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112114053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112304116Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/21fcf637adacbadae04bd7d52de28d9253864fa1364fe6be3915c6f716cee54e/merged/etc/passwd: no such file or directory"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112340304Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/21fcf637adacbadae04bd7d52de28d9253864fa1364fe6be3915c6f716cee54e/merged/etc/group: no such file or directory"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.112592562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.137573134Z" level=info msg="Created container d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd: kube-system/storage-provisioner/storage-provisioner" id=bc28dcb2-3695-4859-a6c9-93766ad5a985 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.138071637Z" level=info msg="Starting container: d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd" id=e625601e-e604-4b6b-8501-3dfb2ac0675d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:15 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:15.139783271Z" level=info msg="Started container" PID=1758 containerID=d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd description=kube-system/storage-provisioner/storage-provisioner id=e625601e-e604-4b6b-8501-3dfb2ac0675d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8dd4bd5ac9d48ec6c4056c64283b06d745457f1702547c25126e2f9e5730328e
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.002748405Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d49aaac-dd14-4bfb-942a-adada94382b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.003657357Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=85c815e3-57f9-4348-a70a-0ab9b198edbb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.004571024Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=a6e722d4-8c5a-4781-aa0c-99c9afc886e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.004682342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.011420333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.011869857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.04448839Z" level=info msg="Created container 383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=a6e722d4-8c5a-4781-aa0c-99c9afc886e8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.044925537Z" level=info msg="Starting container: 383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d" id=4609af78-20a8-4dba-93d2-278d85f8dcc0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.046835912Z" level=info msg="Started container" PID=1795 containerID=383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper id=4609af78-20a8-4dba-93d2-278d85f8dcc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=faf9c19dd039b2d54b154af0c4134eea965eb191717a655a8e06b6a8444af64c
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.12360397Z" level=info msg="Removing container: ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6" id=dc810208-ccda-4a81-a3b3-47b63825e644 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:21 old-k8s-version-824670 crio[572]: time="2025-12-12T20:10:21.1320909Z" level=info msg="Removed container ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc/dashboard-metrics-scraper" id=dc810208-ccda-4a81-a3b3-47b63825e644 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	383f605274c47       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   faf9c19dd039b       dashboard-metrics-scraper-5f989dc9cf-lxjjc       kubernetes-dashboard
	d1429ec971692       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   8dd4bd5ac9d48       storage-provisioner                              kube-system
	bf67adb0b535c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   bee0c99eecce1       kubernetes-dashboard-8694d4445c-8xmbb            kubernetes-dashboard
	5de41c6aa492f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   86af7d39f8a88       coredns-5dd5756b68-shgbw                         kube-system
	c53a92a4cffb4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   0744b65fc0764       busybox                                          default
	2b7d114503f1e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   0fa0cfb1fd5c0       kindnet-75qr9                                    kube-system
	791996c96c157       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   93c8b346fbedd       kube-proxy-nwrgl                                 kube-system
	74f2f8fee475f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   8dd4bd5ac9d48       storage-provisioner                              kube-system
	9a180d91c2d49       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   7bc59168f21f1       kube-controller-manager-old-k8s-version-824670   kube-system
	849ed107c3cf4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   3ff6a54bd9feb       kube-scheduler-old-k8s-version-824670            kube-system
	6eac65576e4bb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   8ea6b12444176       etcd-old-k8s-version-824670                      kube-system
	956a9b5c70fa4       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   627729fb077e5       kube-apiserver-old-k8s-version-824670            kube-system
	
	
	==> coredns [5de41c6aa492fe5e76f33d9b2461f6010074bf5e9cecb2e7aaa01d47eff17b90] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43558 - 57662 "HINFO IN 4623689995966292170.478716781200727525. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.066209008s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-824670
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-824670
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=old-k8s-version-824670
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_08_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:08:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-824670
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:10:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:08:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:10:14 +0000   Fri, 12 Dec 2025 20:09:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-824670
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                1fb8fe54-c4b9-4491-b301-c9b4220778ba
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-shgbw                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-824670                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-75qr9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-824670             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-824670    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-nwrgl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-824670             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lxjjc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8xmbb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node old-k8s-version-824670 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-824670 event: Registered Node old-k8s-version-824670 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-824670 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node old-k8s-version-824670 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node old-k8s-version-824670 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-824670 event: Registered Node old-k8s-version-824670 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [6eac65576e4bbc186c5b79d6f0aa97f5d5234fb637be74a5ce5b44491b28bb54] <==
	{"level":"info","ts":"2025-12-12T20:09:41.548857Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-12T20:09:41.548929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-12T20:09:41.549013Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-12T20:09:41.549134Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:09:41.549176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-12T20:09:41.551131Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-12T20:09:41.551207Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-12T20:09:41.55127Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-12T20:09:41.551525Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-12T20:09:41.551577Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-12T20:09:42.640383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-12T20:09:42.640445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-12T20:09:42.640461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-12T20:09:42.640475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.64048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.640488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.640494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-12T20:09:42.641501Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-824670 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-12T20:09:42.641515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T20:09:42.641538Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-12T20:09:42.641693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-12T20:09:42.641717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-12T20:09:42.642725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-12T20:09:42.642764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-12T20:10:34.723687Z","caller":"traceutil/trace.go:171","msg":"trace[1418872379] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"149.717092ms","start":"2025-12-12T20:10:34.573942Z","end":"2025-12-12T20:10:34.723659Z","steps":["trace[1418872379] 'process raft request'  (duration: 149.584404ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:10:37 up 53 min,  0 user,  load average: 2.85, 1.92, 1.46
	Linux old-k8s-version-824670 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b7d114503f1e7c1f729cb4a9c42a05780b95035046d4e4ef6f086068d58d276] <==
	I1212 20:09:44.554001       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:09:44.577311       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 20:09:44.577438       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:09:44.577456       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:09:44.577497       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:09:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:09:44.878786       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:09:44.949263       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:09:44.949331       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:09:44.949530       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:09:45.177512       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:09:45.177544       1 metrics.go:72] Registering metrics
	I1212 20:09:45.177603       1 controller.go:711] "Syncing nftables rules"
	I1212 20:09:54.878515       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:09:54.878561       1 main.go:301] handling current node
	I1212 20:10:04.878538       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:04.878594       1 main.go:301] handling current node
	I1212 20:10:14.878518       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:14.878569       1 main.go:301] handling current node
	I1212 20:10:24.880407       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:24.880448       1 main.go:301] handling current node
	I1212 20:10:34.883298       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:10:34.883340       1 main.go:301] handling current node
	
	
	==> kube-apiserver [956a9b5c70fa4a99806a77c8333552c74d1d682dc9e545864ecd4d6ae67331a9] <==
	I1212 20:09:43.537152       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1212 20:09:43.558646       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:09:43.602088       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 20:09:43.608900       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 20:09:43.608920       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 20:09:43.609217       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 20:09:43.609257       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 20:09:43.609293       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:09:43.609430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:09:43.637983       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 20:09:43.638025       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:09:43.638034       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:09:43.638041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:09:43.638049       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:09:44.454883       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 20:09:44.482837       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:09:44.498842       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:09:44.505737       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:09:44.512060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:09:44.513180       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:09:44.549345       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.147.212"}
	I1212 20:09:44.566441       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.185.64"}
	I1212 20:09:56.133785       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 20:09:56.179921       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:09:56.186818       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9a180d91c2d49bf246e2537f6f6ec9383636af5bcd8e483965280f5e2ed16670] <==
	I1212 20:09:56.193058       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 20:09:56.199483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.981849ms"
	I1212 20:09:56.199567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.351µs"
	I1212 20:09:56.200991       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8xmbb"
	I1212 20:09:56.201017       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lxjjc"
	I1212 20:09:56.205872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.353848ms"
	I1212 20:09:56.209376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="21.77357ms"
	I1212 20:09:56.211268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.353611ms"
	I1212 20:09:56.211367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.156µs"
	I1212 20:09:56.213842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="4.4255ms"
	I1212 20:09:56.213928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.883µs"
	I1212 20:09:56.216026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.343µs"
	I1212 20:09:56.224597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.78µs"
	I1212 20:09:56.506602       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:09:56.583971       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:09:56.583999       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:10:01.097503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.621803ms"
	I1212 20:10:01.097693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.848µs"
	I1212 20:10:03.089460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.162µs"
	I1212 20:10:04.095464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.372µs"
	I1212 20:10:05.093947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.445µs"
	I1212 20:10:18.041749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.227449ms"
	I1212 20:10:18.041886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.452µs"
	I1212 20:10:21.133883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.187µs"
	I1212 20:10:26.519641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.978µs"
	
	
	==> kube-proxy [791996c96c1570a58c5ea4f6aab56589666965c63b05afd3cf932d0e002d46bf] <==
	I1212 20:09:44.409382       1 server_others.go:69] "Using iptables proxy"
	I1212 20:09:44.420252       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1212 20:09:44.442025       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:09:44.444626       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:09:44.444658       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 20:09:44.444668       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 20:09:44.444702       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:09:44.444941       1 server.go:846] "Version info" version="v1.28.0"
	I1212 20:09:44.444954       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:44.445734       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:09:44.445773       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:09:44.445807       1 config.go:188] "Starting service config controller"
	I1212 20:09:44.445812       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:09:44.446030       1 config.go:315] "Starting node config controller"
	I1212 20:09:44.446049       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:09:44.547542       1 shared_informer.go:318] Caches are synced for node config
	I1212 20:09:44.547551       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:09:44.547587       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [849ed107c3cf4dafcf63a6f35cbf26763c3ee90be82e68800ff7025351783d38] <==
	I1212 20:09:42.085417       1 serving.go:348] Generated self-signed cert in-memory
	W1212 20:09:43.535721       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:09:43.535828       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:09:43.535844       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:09:43.535854       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:09:43.563197       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1212 20:09:43.566326       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:43.569413       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:09:43.569474       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:09:43.572364       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1212 20:09:43.572433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 20:09:43.669988       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.208981     736 topology_manager.go:215] "Topology Admit Handler" podUID="85485575-d55f-4968-9740-35c3df94662b" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-8xmbb"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303727     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6jtr\" (UniqueName: \"kubernetes.io/projected/e0d619f1-f49a-4034-a2f4-51b1cdcaae11-kube-api-access-k6jtr\") pod \"dashboard-metrics-scraper-5f989dc9cf-lxjjc\" (UID: \"e0d619f1-f49a-4034-a2f4-51b1cdcaae11\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303772     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85485575-d55f-4968-9740-35c3df94662b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8xmbb\" (UID: \"85485575-d55f-4968-9740-35c3df94662b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8xmbb"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303798     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5d2g\" (UniqueName: \"kubernetes.io/projected/85485575-d55f-4968-9740-35c3df94662b-kube-api-access-d5d2g\") pod \"kubernetes-dashboard-8694d4445c-8xmbb\" (UID: \"85485575-d55f-4968-9740-35c3df94662b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8xmbb"
	Dec 12 20:09:56 old-k8s-version-824670 kubelet[736]: I1212 20:09:56.303887     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e0d619f1-f49a-4034-a2f4-51b1cdcaae11-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lxjjc\" (UID: \"e0d619f1-f49a-4034-a2f4-51b1cdcaae11\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc"
	Dec 12 20:10:01 old-k8s-version-824670 kubelet[736]: I1212 20:10:01.083376     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8xmbb" podStartSLOduration=0.965248545 podCreationTimestamp="2025-12-12 20:09:56 +0000 UTC" firstStartedPulling="2025-12-12 20:09:56.531576443 +0000 UTC m=+15.612903510" lastFinishedPulling="2025-12-12 20:10:00.649640146 +0000 UTC m=+19.730967217" observedRunningTime="2025-12-12 20:10:01.082124109 +0000 UTC m=+20.163451180" watchObservedRunningTime="2025-12-12 20:10:01.083312252 +0000 UTC m=+20.164639326"
	Dec 12 20:10:03 old-k8s-version-824670 kubelet[736]: I1212 20:10:03.076295     736 scope.go:117] "RemoveContainer" containerID="2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93"
	Dec 12 20:10:04 old-k8s-version-824670 kubelet[736]: I1212 20:10:04.081180     736 scope.go:117] "RemoveContainer" containerID="2b7485f50c717e793408ba234982a40874bcd761bb998b74dc2494c3641d8f93"
	Dec 12 20:10:04 old-k8s-version-824670 kubelet[736]: I1212 20:10:04.081422     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:04 old-k8s-version-824670 kubelet[736]: E1212 20:10:04.081799     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:05 old-k8s-version-824670 kubelet[736]: I1212 20:10:05.084950     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:05 old-k8s-version-824670 kubelet[736]: E1212 20:10:05.085293     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:06 old-k8s-version-824670 kubelet[736]: I1212 20:10:06.509216     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:06 old-k8s-version-824670 kubelet[736]: E1212 20:10:06.509528     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:15 old-k8s-version-824670 kubelet[736]: I1212 20:10:15.105737     736 scope.go:117] "RemoveContainer" containerID="74f2f8fee475f6c8156e8874b05736c6e859ff4488ac0eef026e65fab8b4755e"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: I1212 20:10:21.002180     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: I1212 20:10:21.122520     736 scope.go:117] "RemoveContainer" containerID="ff61be8c0d949c0126ec6ce146b85226100e94bf6dcf66f97fff1bb2a3e045f6"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: I1212 20:10:21.122770     736 scope.go:117] "RemoveContainer" containerID="383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d"
	Dec 12 20:10:21 old-k8s-version-824670 kubelet[736]: E1212 20:10:21.123139     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:26 old-k8s-version-824670 kubelet[736]: I1212 20:10:26.509034     736 scope.go:117] "RemoveContainer" containerID="383f605274c473be6263ace19467c98d6a96fad804f4f87b75e17354e7e18b9d"
	Dec 12 20:10:26 old-k8s-version-824670 kubelet[736]: E1212 20:10:26.509456     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lxjjc_kubernetes-dashboard(e0d619f1-f49a-4034-a2f4-51b1cdcaae11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lxjjc" podUID="e0d619f1-f49a-4034-a2f4-51b1cdcaae11"
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:32 old-k8s-version-824670 systemd[1]: kubelet.service: Consumed 1.417s CPU time.
	
	
	==> kubernetes-dashboard [bf67adb0b535c4f80332ee7cbb048fb485b60b71283dd19817a84c6c40a9acf6] <==
	2025/12/12 20:10:00 Starting overwatch
	2025/12/12 20:10:00 Using namespace: kubernetes-dashboard
	2025/12/12 20:10:00 Using in-cluster config to connect to apiserver
	2025/12/12 20:10:00 Using secret token for csrf signing
	2025/12/12 20:10:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:10:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:10:00 Successful initial request to the apiserver, version: v1.28.0
	2025/12/12 20:10:00 Generating JWE encryption key
	2025/12/12 20:10:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:10:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:10:01 Initializing JWE encryption key from synchronized object
	2025/12/12 20:10:01 Creating in-cluster Sidecar client
	2025/12/12 20:10:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:10:01 Serving insecurely on HTTP port: 9090
	2025/12/12 20:10:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [74f2f8fee475f6c8156e8874b05736c6e859ff4488ac0eef026e65fab8b4755e] <==
	I1212 20:09:44.384386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:10:14.387707       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d1429ec9716928945e4999afe1474e549e2f08279150e50974bc3e09ff1158bd] <==
	I1212 20:10:15.151518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:10:15.158699       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:10:15.158730       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 20:10:32.557683       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:10:32.557877       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-824670_8ad85e3b-a8c0-4324-8ac8-d350da61f618!
	I1212 20:10:32.557878       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f695fbd6-0ef5-496c-8640-6e2ff454cd84", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-824670_8ad85e3b-a8c0-4324-8ac8-d350da61f618 became leader
	I1212 20:10:32.658032       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-824670_8ad85e3b-a8c0-4324-8ac8-d350da61f618!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-824670 -n old-k8s-version-824670
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-824670 -n old-k8s-version-824670: exit status 2 (307.86257ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-824670 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.15s)
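One thread worth pulling from the post-mortem above is the dashboard-metrics-scraper pod cycling through CrashLoopBackOff in the kubelet log (back-off doubling from 10s to 20s). It is incidental to the Pause failure itself, but the previous container's output and the pod events would show why it keeps exiting. A minimal follow-up sketch, reusing the context, namespace and pod name that appear in the logs above (only useful before the profile is deleted later in the run):

	# Pull the last crashed container's output and the pod events for the scraper.
	kubectl --context old-k8s-version-824670 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-lxjjc --previous
	kubectl --context old-k8s-version-824670 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-lxjjc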

TestStartStop/group/no-preload/serial/Pause (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-753103 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-753103 --alsologtostderr -v=1: exit status 80 (2.619141244s)

-- stdout --
	* Pausing node no-preload-753103 ... 
	
	

-- /stdout --
** stderr ** 
	I1212 20:10:45.111689  296616 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:45.118917  296616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:45.118937  296616 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:45.118944  296616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:45.119246  296616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:10:45.128211  296616 out.go:368] Setting JSON to false
	I1212 20:10:45.128236  296616 mustload.go:66] Loading cluster: no-preload-753103
	I1212 20:10:45.128830  296616 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:45.129414  296616 cli_runner.go:164] Run: docker container inspect no-preload-753103 --format={{.State.Status}}
	I1212 20:10:45.148028  296616 host.go:66] Checking if "no-preload-753103" exists ...
	I1212 20:10:45.148290  296616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:45.206133  296616 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-12 20:10:45.196559312 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:45.206841  296616 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765505725-22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765505725-22112-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-753103 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 20:10:45.217599  296616 out.go:179] * Pausing node no-preload-753103 ... 
	I1212 20:10:45.219060  296616 host.go:66] Checking if "no-preload-753103" exists ...
	I1212 20:10:45.219414  296616 ssh_runner.go:195] Run: systemctl --version
	I1212 20:10:45.219472  296616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753103
	I1212 20:10:45.237702  296616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/no-preload-753103/id_rsa Username:docker}
	I1212 20:10:45.334969  296616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:45.348752  296616 pause.go:52] kubelet running: true
	I1212 20:10:45.348827  296616 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:10:45.567627  296616 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:10:45.567717  296616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:10:45.643079  296616 cri.go:89] found id: "a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37"
	I1212 20:10:45.643104  296616 cri.go:89] found id: "c0fdb2aafae83a7764a44b93a09f4e725a31b95478d233fb8585e31d03e106f5"
	I1212 20:10:45.643110  296616 cri.go:89] found id: "1da232930b1225ba26cc77335c5fd77023b588fafd9332d44b052afc26a6740d"
	I1212 20:10:45.643116  296616 cri.go:89] found id: "87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0"
	I1212 20:10:45.643121  296616 cri.go:89] found id: "8694fb568f6184f280ef0979168c88307d2d2ce6abadf548201dab5907b1dec2"
	I1212 20:10:45.643127  296616 cri.go:89] found id: "0de13181907744cb32a821b90949248f3f382280f37f0ac21d7a4e83b8b9f488"
	I1212 20:10:45.643133  296616 cri.go:89] found id: "452a3991e4df436dd9d2ad0b08c3ffa20c78ded9ad019978d64bd40f23d993a8"
	I1212 20:10:45.643138  296616 cri.go:89] found id: "3286e3a6497804378907ab37416b64a0519732034946847c95152a1d59829cc2"
	I1212 20:10:45.643142  296616 cri.go:89] found id: "249b72d350355577980958226dcfac379cd22975003283e5e7acd74458648cfc"
	I1212 20:10:45.643151  296616 cri.go:89] found id: "93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	I1212 20:10:45.643159  296616 cri.go:89] found id: "163acd72ae86023a3eae1b09074158d0b11755431dd837cc567bffd051dfb67d"
	I1212 20:10:45.643164  296616 cri.go:89] found id: ""
	I1212 20:10:45.643218  296616 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:10:45.655231  296616 retry.go:31] will retry after 252.041717ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:45Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:10:45.907705  296616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:45.933074  296616 pause.go:52] kubelet running: false
	I1212 20:10:45.933144  296616 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:10:46.108735  296616 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:10:46.108837  296616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:10:46.179993  296616 cri.go:89] found id: "a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37"
	I1212 20:10:46.180014  296616 cri.go:89] found id: "c0fdb2aafae83a7764a44b93a09f4e725a31b95478d233fb8585e31d03e106f5"
	I1212 20:10:46.180020  296616 cri.go:89] found id: "1da232930b1225ba26cc77335c5fd77023b588fafd9332d44b052afc26a6740d"
	I1212 20:10:46.180025  296616 cri.go:89] found id: "87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0"
	I1212 20:10:46.180029  296616 cri.go:89] found id: "8694fb568f6184f280ef0979168c88307d2d2ce6abadf548201dab5907b1dec2"
	I1212 20:10:46.180035  296616 cri.go:89] found id: "0de13181907744cb32a821b90949248f3f382280f37f0ac21d7a4e83b8b9f488"
	I1212 20:10:46.180039  296616 cri.go:89] found id: "452a3991e4df436dd9d2ad0b08c3ffa20c78ded9ad019978d64bd40f23d993a8"
	I1212 20:10:46.180043  296616 cri.go:89] found id: "3286e3a6497804378907ab37416b64a0519732034946847c95152a1d59829cc2"
	I1212 20:10:46.180046  296616 cri.go:89] found id: "249b72d350355577980958226dcfac379cd22975003283e5e7acd74458648cfc"
	I1212 20:10:46.180068  296616 cri.go:89] found id: "93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	I1212 20:10:46.180074  296616 cri.go:89] found id: "163acd72ae86023a3eae1b09074158d0b11755431dd837cc567bffd051dfb67d"
	I1212 20:10:46.180077  296616 cri.go:89] found id: ""
	I1212 20:10:46.180114  296616 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:10:46.192399  296616 retry.go:31] will retry after 243.299512ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:46Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:10:46.436945  296616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:46.451409  296616 pause.go:52] kubelet running: false
	I1212 20:10:46.451466  296616 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:10:46.610484  296616 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:10:46.610558  296616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:10:46.675380  296616 cri.go:89] found id: "a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37"
	I1212 20:10:46.675407  296616 cri.go:89] found id: "c0fdb2aafae83a7764a44b93a09f4e725a31b95478d233fb8585e31d03e106f5"
	I1212 20:10:46.675413  296616 cri.go:89] found id: "1da232930b1225ba26cc77335c5fd77023b588fafd9332d44b052afc26a6740d"
	I1212 20:10:46.675418  296616 cri.go:89] found id: "87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0"
	I1212 20:10:46.675423  296616 cri.go:89] found id: "8694fb568f6184f280ef0979168c88307d2d2ce6abadf548201dab5907b1dec2"
	I1212 20:10:46.675427  296616 cri.go:89] found id: "0de13181907744cb32a821b90949248f3f382280f37f0ac21d7a4e83b8b9f488"
	I1212 20:10:46.675431  296616 cri.go:89] found id: "452a3991e4df436dd9d2ad0b08c3ffa20c78ded9ad019978d64bd40f23d993a8"
	I1212 20:10:46.675435  296616 cri.go:89] found id: "3286e3a6497804378907ab37416b64a0519732034946847c95152a1d59829cc2"
	I1212 20:10:46.675439  296616 cri.go:89] found id: "249b72d350355577980958226dcfac379cd22975003283e5e7acd74458648cfc"
	I1212 20:10:46.675456  296616 cri.go:89] found id: "93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	I1212 20:10:46.675461  296616 cri.go:89] found id: "163acd72ae86023a3eae1b09074158d0b11755431dd837cc567bffd051dfb67d"
	I1212 20:10:46.675466  296616 cri.go:89] found id: ""
	I1212 20:10:46.675512  296616 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:10:46.689663  296616 retry.go:31] will retry after 480.904622ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:46Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:10:47.171009  296616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:10:47.184659  296616 pause.go:52] kubelet running: false
	I1212 20:10:47.184721  296616 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:10:47.331491  296616 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:10:47.331579  296616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:10:47.396354  296616 cri.go:89] found id: "a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37"
	I1212 20:10:47.396375  296616 cri.go:89] found id: "c0fdb2aafae83a7764a44b93a09f4e725a31b95478d233fb8585e31d03e106f5"
	I1212 20:10:47.396379  296616 cri.go:89] found id: "1da232930b1225ba26cc77335c5fd77023b588fafd9332d44b052afc26a6740d"
	I1212 20:10:47.396383  296616 cri.go:89] found id: "87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0"
	I1212 20:10:47.396386  296616 cri.go:89] found id: "8694fb568f6184f280ef0979168c88307d2d2ce6abadf548201dab5907b1dec2"
	I1212 20:10:47.396390  296616 cri.go:89] found id: "0de13181907744cb32a821b90949248f3f382280f37f0ac21d7a4e83b8b9f488"
	I1212 20:10:47.396392  296616 cri.go:89] found id: "452a3991e4df436dd9d2ad0b08c3ffa20c78ded9ad019978d64bd40f23d993a8"
	I1212 20:10:47.396395  296616 cri.go:89] found id: "3286e3a6497804378907ab37416b64a0519732034946847c95152a1d59829cc2"
	I1212 20:10:47.396397  296616 cri.go:89] found id: "249b72d350355577980958226dcfac379cd22975003283e5e7acd74458648cfc"
	I1212 20:10:47.396409  296616 cri.go:89] found id: "93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	I1212 20:10:47.396412  296616 cri.go:89] found id: "163acd72ae86023a3eae1b09074158d0b11755431dd837cc567bffd051dfb67d"
	I1212 20:10:47.396415  296616 cri.go:89] found id: ""
	I1212 20:10:47.396452  296616 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:10:47.473792  296616 out.go:203] 
	W1212 20:10:47.553505  296616 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:10:47.553551  296616 out.go:285] * 
	* 
	W1212 20:10:47.557414  296616 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:10:47.629898  296616 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-753103 --alsologtostderr -v=1 failed: exit status 80
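The trace above shows where pause gives up: kubelet is disabled, crictl still lists the kube-system and kubernetes-dashboard containers, but every `sudo runc list -f json` retry fails with "open /run/runc: no such file or directory", so minikube never gets a container list to pause. A rough manual reproduction of the same probes, using the profile name from this run (whether cri-o keeps its runc state under /run/runc or under a runtime-specific root on this image is an assumption to verify on the node):

	# Re-run the checks from pause.go by hand over SSH into the kic container.
	minikube ssh -p no-preload-753103 -- sudo systemctl is-active kubelet
	minikube ssh -p no-preload-753103 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube ssh -p no-preload-753103 -- sudo runc list -f json          # fails here: open /run/runc: no such file or directory
	minikube ssh -p no-preload-753103 -- ls -ld /run/runc /run/crio      # see which runtime state directories actually exist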
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-753103
helpers_test.go:244: (dbg) docker inspect no-preload-753103:

-- stdout --
	[
	    {
	        "Id": "452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd",
	        "Created": "2025-12-12T20:08:31.941720816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281502,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:09:45.308654013Z",
	            "FinishedAt": "2025-12-12T20:09:44.412072699Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/hosts",
	        "LogPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd-json.log",
	        "Name": "/no-preload-753103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-753103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-753103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd",
	                "LowerDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-753103",
	                "Source": "/var/lib/docker/volumes/no-preload-753103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-753103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-753103",
	                "name.minikube.sigs.k8s.io": "no-preload-753103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b77079e574a8811cf1119bfd423c70cef66dd83914d25dba4759248caed172d",
	            "SandboxKey": "/var/run/docker/netns/6b77079e574a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-753103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5f00e9d4498c5a5c29031e27e31d73fb062b781edd69002a7dac693e0d7a335",
	                    "EndpointID": "0c7e13672c05dae95b8886235385614ba264bf49fe3354f74a09acbe06a644f3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ce:6b:03:8d:e9:d5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-753103",
	                        "452e89832e40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
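Note from the inspect payload: at the Docker level the kic container is still healthy, with State.Status "running" and Paused false; only the in-guest pause sequence failed. Two quick spot checks against the same container, using the name and forwarded SSH port shown above:

	# Container state and the host port mapped to the guest's sshd, as Docker reports them.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-753103
	docker port no-preload-753103 22/tcp    # 127.0.0.1:33074 in this run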
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103: exit status 2 (308.76064ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-753103 logs -n 25: (1.631316507s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ delete  │ -p cert-expiration-070436                                                                                                                                                                                                                            │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p old-k8s-version-824670 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p no-preload-753103 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-824670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ addons  │ enable dashboard -p no-preload-753103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                            │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                         │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p disable-driver-mounts-044739                                                                                                                                                                                                                      │ disable-driver-mounts-044739 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ no-preload-753103 image list --format=json                                                                                                                                                                                                           │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p no-preload-753103 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:41.692529  295304 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:41.692832  295304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:41.692845  295304 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:41.692853  295304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:41.693166  295304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:10:41.693721  295304 out.go:368] Setting JSON to false
	I1212 20:10:41.694958  295304 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3189,"bootTime":1765567053,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:10:41.695027  295304 start.go:143] virtualization: kvm guest
	I1212 20:10:41.697036  295304 out.go:179] * [embed-certs-399565] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:10:41.698462  295304 notify.go:221] Checking for updates...
	I1212 20:10:41.698482  295304 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:41.699614  295304 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:41.701198  295304 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:10:41.702501  295304 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:10:41.703721  295304 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:10:41.706914  295304 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:41.708655  295304 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:41.708809  295304 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:41.708941  295304 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:41.709162  295304 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:41.736375  295304 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:10:41.736489  295304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:41.805184  295304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-12 20:10:41.794085924 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:41.805291  295304 docker.go:319] overlay module found
	I1212 20:10:41.806950  295304 out.go:179] * Using the docker driver based on user configuration
	I1212 20:10:41.808086  295304 start.go:309] selected driver: docker
	I1212 20:10:41.808117  295304 start.go:927] validating driver "docker" against <nil>
	I1212 20:10:41.808130  295304 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:41.808903  295304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:41.879218  295304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-12 20:10:41.868966394 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:41.879408  295304 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:10:41.879609  295304 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:10:41.881378  295304 out.go:179] * Using Docker driver with root privileges
	I1212 20:10:41.882511  295304 cni.go:84] Creating CNI manager for ""
	I1212 20:10:41.882610  295304 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:41.882625  295304 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:10:41.882687  295304 start.go:353] cluster config:
	{Name:embed-certs-399565 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-399565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:41.884011  295304 out.go:179] * Starting "embed-certs-399565" primary control-plane node in "embed-certs-399565" cluster
	I1212 20:10:41.884959  295304 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:10:41.886157  295304 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:10:41.887252  295304 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:41.887293  295304 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:10:41.887304  295304 cache.go:65] Caching tarball of preloaded images
	I1212 20:10:41.887360  295304 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:10:41.887384  295304 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:10:41.887394  295304 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:10:41.887495  295304 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/config.json ...
	I1212 20:10:41.887520  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/config.json: {Name:mk930d7a15f7dbca00bf49663208fb3e1c8a9b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.908425  295304 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:10:41.908454  295304 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:10:41.908474  295304 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:10:41.908507  295304 start.go:360] acquireMachinesLock for embed-certs-399565: {Name:mk1cab5bf8b327e3a1e1090095b68f2974d5f79b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:10:41.908618  295304 start.go:364] duration metric: took 89.092µs to acquireMachinesLock for "embed-certs-399565"
	I1212 20:10:41.908647  295304 start.go:93] Provisioning new machine with config: &{Name:embed-certs-399565 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-399565 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:41.908725  295304 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:10:41.571612  289770 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-433034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:41.591935  289770 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1212 20:10:41.596423  289770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
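[editor's note] The two ssh_runner calls above first check whether "192.168.103.1  host.minikube.internal" is already in /etc/hosts and then rewrite the file with a grep-filter-and-append one-liner (the same pattern reappears later for control-plane.minikube.internal). A minimal sketch of composing that shell command in Go; the helper name is illustrative, not minikube's actual API:

    package main

    import "fmt"

    // pinHostsEntry returns a bash one-liner that drops any existing line for
    // the given hostname from /etc/hosts and appends "ip<TAB>host", going
    // through a temp file so the sudo cp replaces the file in one step.
    func pinHostsEntry(ip, host string) string {
        return fmt.Sprintf(
            `{ grep -v $'\t%s$' "/etc/hosts"; echo "%s	%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
            host, ip, host)
    }

    func main() {
        fmt.Println(pinHostsEntry("192.168.103.1", "host.minikube.internal"))
    }

Filtering on the trailing tab-plus-hostname keeps the update idempotent: re-running it never accumulates duplicate entries.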
	I1212 20:10:41.609968  289770 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:10:41.610111  289770 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:41.610194  289770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:41.647065  289770 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:41.647086  289770 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:10:41.647127  289770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:41.674477  289770 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:41.674503  289770 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:10:41.674512  289770 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1212 20:10:41.674617  289770 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-433034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
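[editor's note] The [Unit]/[Service] block logged above is the kubelet drop-in (10-kubeadm.conf, scp'd a few lines below): it clears ExecStart and re-sets it with node-specific flags such as --hostname-override and --node-ip. A rough text/template sketch that renders a similar drop-in; the template text and field names are simplifications for illustration, not minikube's real template:

    package main

    import (
        "os"
        "text/template"
    )

    // dropIn is a trimmed-down systemd drop-in modeled on the log output above.
    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Values taken from this cluster's log; a real run would write the result
        // to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf instead of stdout.
        _ = t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.34.2",
            "NodeName":          "default-k8s-diff-port-433034",
            "NodeIP":            "192.168.103.2",
        })
    }

The empty "ExecStart=" line is deliberate: systemd requires clearing an existing ExecStart before a drop-in may replace it.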
	I1212 20:10:41.674696  289770 ssh_runner.go:195] Run: crio config
	I1212 20:10:41.727974  289770 cni.go:84] Creating CNI manager for ""
	I1212 20:10:41.728000  289770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:41.728020  289770 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:10:41.728050  289770 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-433034 NodeName:default-k8s-diff-port-433034 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:10:41.728223  289770 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-433034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:10:41.728304  289770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:10:41.736985  289770 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:10:41.737042  289770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:10:41.745683  289770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 20:10:41.760522  289770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:10:41.782567  289770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
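[editor's note] The kubeadm config dumped above is four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined with "---" and copied to /var/tmp/minikube/kubeadm.yaml.new (2227 bytes per the scp line). A small, purely illustrative sketch of joining independently rendered documents into one multi-document file, which kubeadm accepts as a single --config input:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Each entry stands in for a fully rendered document from the log above.
        docs := []string{
            "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...",
            "apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# ...",
            "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# ...",
            "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# ...",
        }
        combined := strings.Join(docs, "\n---\n")
        fmt.Println(combined)
        fmt.Printf("# %d bytes total\n", len(combined))
    }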
	I1212 20:10:41.799056  289770 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:10:41.803567  289770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:41.815701  289770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:41.922909  289770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:41.950098  289770 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034 for IP: 192.168.103.2
	I1212 20:10:41.950121  289770 certs.go:195] generating shared ca certs ...
	I1212 20:10:41.950143  289770 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.950324  289770 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:10:41.950391  289770 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:10:41.950406  289770 certs.go:257] generating profile certs ...
	I1212 20:10:41.950459  289770 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.key
	I1212 20:10:41.950479  289770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.crt with IP's: []
	I1212 20:10:41.978857  289770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.crt ...
	I1212 20:10:41.978887  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.crt: {Name:mk2bf73a6340ccd36d94f4e2152bd3802b73c6fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.979052  289770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.key ...
	I1212 20:10:41.979072  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.key: {Name:mk06acdffbf93d44bcb7e25a2047d506206b8423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.979193  289770 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78
	I1212 20:10:41.979217  289770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1212 20:10:42.069552  289770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78 ...
	I1212 20:10:42.069585  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78: {Name:mk954e68adcec24c91161e9771f465062205d1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:42.069819  289770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78 ...
	I1212 20:10:42.069843  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78: {Name:mk67a622c1f12d08eedf35004a6ad825a0644108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:42.069966  289770 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt
	I1212 20:10:42.070080  289770 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key
	I1212 20:10:42.070170  289770 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key
	I1212 20:10:42.070188  289770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt with IP's: []
	I1212 20:10:42.314746  289770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt ...
	I1212 20:10:42.314777  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt: {Name:mk94d580659e7189bf2baf23d2f8504f8aef4985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:42.374361  289770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key ...
	I1212 20:10:42.374391  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key: {Name:mkb5065b604b0aa12be72c40f05b38b18f8204b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
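[editor's note] The certs.go steps above mint the profile certificates: a client cert for minikube-user, an apiserver serving cert whose SANs are the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.103.2, and an aggregator (proxy-client) cert, all signed by the cached minikubeCA. A stdlib-only sketch of issuing an IP-SAN server certificate from a CA; key type, validity and error handling are simplified assumptions, not minikube's implementation:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA standing in for minikubeCA (errors ignored for brevity).
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // API server serving cert with the IP SANs seen in the log above.
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }

10.96.0.1 is the first address of the service CIDR (10.96.0.0/12), so in-cluster clients reaching the kubernetes Service get a certificate that matches.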
	I1212 20:10:42.374653  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:10:42.374703  289770 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:10:42.374720  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:10:42.374747  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:10:42.374774  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:10:42.374809  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:10:42.374869  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:10:42.375596  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:10:42.396082  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:10:42.413213  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:10:42.430418  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:10:42.447205  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:10:42.463675  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1212 20:10:42.481534  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:10:42.498588  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:10:42.515082  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:10:42.540720  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:10:42.558289  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:10:42.574960  289770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:10:42.587452  289770 ssh_runner.go:195] Run: openssl version
	I1212 20:10:42.593439  289770 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.600668  289770 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:10:42.608626  289770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.612218  289770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.612268  289770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.646424  289770 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:10:42.654255  289770 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:10:42.661698  289770 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.669000  289770 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:10:42.678662  289770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.682370  289770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.682433  289770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.717398  289770 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:10:42.725365  289770 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
	I1212 20:10:42.732780  289770 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.740098  289770 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:10:42.747566  289770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.751169  289770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.751221  289770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.794427  289770 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:10:42.803760  289770 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
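[editor's note] The sequence above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trust anchors by directory lookup. A sketch of reproducing the hash-and-symlink step by shelling out to the openssl CLI; paths in main are placeholders:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash runs `openssl x509 -hash -noout -in certPath` and creates the
    // "<hash>.0" symlink in certsDir pointing at certPath, mirroring the ln -fs
    // steps in the log. It returns the symlink path.
    func linkCertByHash(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        // Emulate -f: drop any stale link before re-creating it.
        _ = os.Remove(link)
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(link, err)
    }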
	I1212 20:10:42.812167  289770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:10:42.816546  289770 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:10:42.816607  289770 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:42.816672  289770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:10:42.816714  289770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:10:42.844343  289770 cri.go:89] found id: ""
	I1212 20:10:42.844421  289770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:10:42.852381  289770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:42.859963  289770 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:42.860006  289770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:42.867465  289770 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:42.867487  289770 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:42.867531  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 20:10:42.874804  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:42.874861  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:42.881738  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 20:10:42.888789  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:42.888834  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:42.895637  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 20:10:42.903340  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:42.903390  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:42.912496  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 20:10:42.920011  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:42.920060  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:10:42.927216  289770 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
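[editor's note] The Start line above runs kubeadm init with the cached minikube binaries prepended to PATH and a long --ignore-preflight-errors list, presumably because a docker-driver node is expected to fail checks such as SystemVerification, Swap and Mem (the "ignoring SystemVerification ... because of docker driver" line and the verification warnings that follow are consistent with that). A sketch of assembling an equivalent command string; the helper is illustrative and the ignore list is copied from this log:

    package main

    import (
        "fmt"
        "strings"
    )

    func kubeadmInitCmd(k8sVersion, configPath string, ignored []string) string {
        return fmt.Sprintf(
            `sudo /bin/bash -c "env PATH=/var/lib/minikube/binaries/%s:$PATH kubeadm init --config %s --ignore-preflight-errors=%s"`,
            k8sVersion, configPath, strings.Join(ignored, ","))
    }

    func main() {
        ignored := []string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
            "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"}
        fmt.Println(kubeadmInitCmd("v1.34.2", "/var/tmp/minikube/kubeadm.yaml", ignored))
    }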
	I1212 20:10:42.971358  289770 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:10:42.971451  289770 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:10:42.991280  289770 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:10:42.991382  289770 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:10:42.991443  289770 kubeadm.go:319] OS: Linux
	I1212 20:10:42.991528  289770 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:10:42.991593  289770 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:10:42.991663  289770 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:10:42.991730  289770 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:10:42.991809  289770 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:10:42.991878  289770 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:10:42.991933  289770 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:10:42.992013  289770 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:10:43.054439  289770 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:10:43.054589  289770 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:10:43.054748  289770 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:10:43.062502  289770 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:10:39.992548  294089 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:10:39.992827  294089 start.go:159] libmachine.API.Create for "newest-cni-832562" (driver="docker")
	I1212 20:10:39.992864  294089 client.go:173] LocalClient.Create starting
	I1212 20:10:39.992942  294089 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:10:39.992996  294089 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:39.993023  294089 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:39.993116  294089 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:10:39.993153  294089 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:39.993167  294089 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:39.993579  294089 cli_runner.go:164] Run: docker network inspect newest-cni-832562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:40.012789  294089 cli_runner.go:211] docker network inspect newest-cni-832562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:40.012882  294089 network_create.go:284] running [docker network inspect newest-cni-832562] to gather additional debugging logs...
	I1212 20:10:40.012910  294089 cli_runner.go:164] Run: docker network inspect newest-cni-832562
	W1212 20:10:40.030825  294089 cli_runner.go:211] docker network inspect newest-cni-832562 returned with exit code 1
	I1212 20:10:40.030857  294089 network_create.go:287] error running [docker network inspect newest-cni-832562]: docker network inspect newest-cni-832562: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-832562 not found
	I1212 20:10:40.030871  294089 network_create.go:289] output of [docker network inspect newest-cni-832562]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-832562 not found
	
	** /stderr **
	I1212 20:10:40.031082  294089 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:40.051303  294089 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:10:40.051976  294089 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:10:40.052677  294089 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:10:40.053470  294089 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e2b100}
	I1212 20:10:40.053499  294089 network_create.go:124] attempt to create docker network newest-cni-832562 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 20:10:40.053540  294089 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-832562 newest-cni-832562
	I1212 20:10:40.273000  294089 network_create.go:108] docker network newest-cni-832562 192.168.76.0/24 created
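[editor's note] In the network.go lines above, minikube walks candidate private /24 subnets (192.168.49.0, 58.0, 67.0, ...), skips any already backing a docker bridge, and creates the network on the first free one (192.168.76.0/24 here; the embed-certs cluster later lands on 192.168.94.0/24). A simplified sketch of that scan; the starting subnet and the step of 9 in the third octet are read off this log, not taken from minikube's source:

    package main

    import "fmt"

    // firstFreeSubnet returns the first 192.168.x.0/24 not present in taken,
    // starting at third octet 49 and stepping by 9, matching the candidates
    // seen in this log (49, 58, 67, 76, 85, 94, ...).
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 254; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // existing minikube bridge networks from earlier tests
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
    }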
	I1212 20:10:40.273034  294089 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-832562" container
	I1212 20:10:40.273082  294089 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:40.292532  294089 cli_runner.go:164] Run: docker volume create newest-cni-832562 --label name.minikube.sigs.k8s.io=newest-cni-832562 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:40.311252  294089 oci.go:103] Successfully created a docker volume newest-cni-832562
	I1212 20:10:40.311339  294089 cli_runner.go:164] Run: docker run --rm --name newest-cni-832562-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-832562 --entrypoint /usr/bin/test -v newest-cni-832562:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:41.346201  294089 cli_runner.go:217] Completed: docker run --rm --name newest-cni-832562-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-832562 --entrypoint /usr/bin/test -v newest-cni-832562:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (1.034822817s)
	I1212 20:10:41.346227  294089 oci.go:107] Successfully prepared a docker volume newest-cni-832562
	I1212 20:10:41.346298  294089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:10:41.346317  294089 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:41.346365  294089 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-832562:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:10:44.099834  294089 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-832562:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (2.753414061s)
	I1212 20:10:44.099872  294089 kic.go:203] duration metric: took 2.753549183s to extract preloaded images to volume ...
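[editor's note] The two cli_runner lines above populate the node's /var volume by running a throwaway kicbase container whose entrypoint is tar, mounting the lz4 preload read-only and the named volume as the extraction target (about 2.75s here). A sketch of building that docker invocation with os/exec; the preload path and image tag in main are shortened placeholders:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload untars an lz4 preload tarball into a named docker volume by
    // mounting both into a short-lived container, as the log does for
    // newest-cni-832562 and embed-certs-399565.
    func extractPreload(preloadPath, volume, kicbaseImage string) *exec.Cmd {
        return exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", preloadPath+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            kicbaseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    }

    func main() {
        cmd := extractPreload(
            "/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images.tar.lz4", // placeholder path
            "newest-cni-832562",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112")
        fmt.Println(cmd.String())
    }

Extracting into the volume before the node container starts is what lets the later kubelet/kubeadm steps find all images already present ("all images are preloaded for cri-o runtime").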
	W1212 20:10:44.099965  294089 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:10:44.099999  294089 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:10:44.100041  294089 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:10:44.161254  294089 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-832562 --name newest-cni-832562 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-832562 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-832562 --network newest-cni-832562 --ip 192.168.76.2 --volume newest-cni-832562:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:10:44.470232  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Running}}
	I1212 20:10:44.496900  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:10:44.520446  294089 cli_runner.go:164] Run: docker exec newest-cni-832562 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:10:44.569856  294089 oci.go:144] the created container "newest-cni-832562" has a running status.
	I1212 20:10:44.569910  294089 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa...
	I1212 20:10:44.662011  294089 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:10:44.692652  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:10:44.723255  294089 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:10:44.723410  294089 kic_runner.go:114] Args: [docker exec --privileged newest-cni-832562 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:10:44.779155  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:10:43.069924  289770 out.go:252]   - Generating certificates and keys ...
	I1212 20:10:43.070036  289770 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:10:43.070149  289770 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:10:43.165645  289770 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:10:43.527630  289770 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:10:43.661648  289770 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:10:43.994295  289770 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:10:44.199883  289770 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:10:44.200617  289770 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-433034 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 20:10:44.355814  289770 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:10:44.356007  289770 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-433034 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 20:10:44.572900  289770 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:10:44.662865  289770 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:10:44.891529  289770 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:10:44.908510  289770 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:10:45.495889  289770 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:10:45.547477  289770 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:10:45.619498  289770 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:10:45.744458  289770 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:10:46.017231  289770 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:10:46.017903  289770 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:10:46.026939  289770 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:10:46.028420  289770 out.go:252]   - Booting up control plane ...
	I1212 20:10:46.028551  289770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:10:46.028703  289770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:10:46.029392  289770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:10:46.042736  289770 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:10:46.042879  289770 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:10:46.051222  289770 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:10:46.051646  289770 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:10:46.051713  289770 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:10:41.910828  295304 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:10:41.911109  295304 start.go:159] libmachine.API.Create for "embed-certs-399565" (driver="docker")
	I1212 20:10:41.911156  295304 client.go:173] LocalClient.Create starting
	I1212 20:10:41.911265  295304 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:10:41.911321  295304 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:41.911348  295304 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:41.911415  295304 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:10:41.911446  295304 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:41.911465  295304 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:41.911882  295304 cli_runner.go:164] Run: docker network inspect embed-certs-399565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:41.931002  295304 cli_runner.go:211] docker network inspect embed-certs-399565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:41.931085  295304 network_create.go:284] running [docker network inspect embed-certs-399565] to gather additional debugging logs...
	I1212 20:10:41.931108  295304 cli_runner.go:164] Run: docker network inspect embed-certs-399565
	W1212 20:10:41.951793  295304 cli_runner.go:211] docker network inspect embed-certs-399565 returned with exit code 1
	I1212 20:10:41.951830  295304 network_create.go:287] error running [docker network inspect embed-certs-399565]: docker network inspect embed-certs-399565: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-399565 not found
	I1212 20:10:41.951846  295304 network_create.go:289] output of [docker network inspect embed-certs-399565]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-399565 not found
	
	** /stderr **
	I1212 20:10:41.952014  295304 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:41.973077  295304 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:10:41.973991  295304 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:10:41.974930  295304 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:10:41.975699  295304 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5b0e30eb6e7a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:34:8e:df:07:77} reservation:<nil>}
	I1212 20:10:41.976338  295304 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c5f00e9d4498 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:36:57:36:b3:ba:39} reservation:<nil>}
	I1212 20:10:41.977313  295304 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f646a0}
	I1212 20:10:41.977342  295304 network_create.go:124] attempt to create docker network embed-certs-399565 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1212 20:10:41.977400  295304 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-399565 embed-certs-399565
	I1212 20:10:42.030736  295304 network_create.go:108] docker network embed-certs-399565 192.168.94.0/24 created
	I1212 20:10:42.030769  295304 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-399565" container
	I1212 20:10:42.030838  295304 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:42.050291  295304 cli_runner.go:164] Run: docker volume create embed-certs-399565 --label name.minikube.sigs.k8s.io=embed-certs-399565 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:42.071788  295304 oci.go:103] Successfully created a docker volume embed-certs-399565
	I1212 20:10:42.071861  295304 cli_runner.go:164] Run: docker run --rm --name embed-certs-399565-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-399565 --entrypoint /usr/bin/test -v embed-certs-399565:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:44.346604  295304 cli_runner.go:217] Completed: docker run --rm --name embed-certs-399565-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-399565 --entrypoint /usr/bin/test -v embed-certs-399565:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (2.274699523s)
	I1212 20:10:44.346637  295304 oci.go:107] Successfully prepared a docker volume embed-certs-399565
	I1212 20:10:44.346715  295304 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:44.346730  295304 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:44.346802  295304 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-399565:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Dec 12 20:10:11 no-preload-753103 crio[568]: time="2025-12-12T20:10:11.388342597Z" level=info msg="Started container" PID=1749 containerID=75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper id=7d881afc-c710-440f-85f0-503da6cd8667 name=/runtime.v1.RuntimeService/StartContainer sandboxID=464c42ffc45e8bce36830d801cb34bc423ba9b850124801dfcee890dcbdb3c0d
	Dec 12 20:10:11 no-preload-753103 crio[568]: time="2025-12-12T20:10:11.417498141Z" level=info msg="Removing container: ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea" id=d3bc9c6e-fda3-462c-ba0f-31103386154c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:11 no-preload-753103 crio[568]: time="2025-12-12T20:10:11.426235725Z" level=info msg="Removed container ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=d3bc9c6e-fda3-462c-ba0f-31103386154c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.448193718Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5dc28803-36d3-4aa5-a3aa-a6edc58cdc61 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.44909664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=92422fb3-4baa-43fd-aca2-4d35aaa1802f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.450154719Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=451ef8c7-1233-4db5-800d-36a25ee08479 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.450321951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.454951598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.455149104Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f73925e5a554d88bce6cafdf2437ace7e354190f8550acba45466dba0294f17/merged/etc/passwd: no such file or directory"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.455187559Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f73925e5a554d88bce6cafdf2437ace7e354190f8550acba45466dba0294f17/merged/etc/group: no such file or directory"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.455509595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.486833467Z" level=info msg="Created container a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37: kube-system/storage-provisioner/storage-provisioner" id=451ef8c7-1233-4db5-800d-36a25ee08479 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.487466804Z" level=info msg="Starting container: a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37" id=b893b40f-431c-4e3b-a6aa-a1799d150c3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.48949518Z" level=info msg="Started container" PID=1763 containerID=a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37 description=kube-system/storage-provisioner/storage-provisioner id=b893b40f-431c-4e3b-a6aa-a1799d150c3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec6a25173181706d291b64a15164c93ca81040b5ca46f4c3b71a095ff82184cf
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.342410238Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=74935609-47e7-4da0-a28e-965afeea54dd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.361739471Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8966b53e-c2ee-4dd3-952f-d1eeb7d7a0e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.362839208Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=980d2a4b-421d-42de-9ef5-12b93c11c877 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.362977501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.394866791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.397309559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.655751102Z" level=info msg="Created container 93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=980d2a4b-421d-42de-9ef5-12b93c11c877 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.65656208Z" level=info msg="Starting container: 93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f" id=a85c2ecb-9d0f-4151-bbfc-22aa7673af20 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.659441901Z" level=info msg="Started container" PID=1799 containerID=93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper id=a85c2ecb-9d0f-4151-bbfc-22aa7673af20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=464c42ffc45e8bce36830d801cb34bc423ba9b850124801dfcee890dcbdb3c0d
	Dec 12 20:10:35 no-preload-753103 crio[568]: time="2025-12-12T20:10:35.480500738Z" level=info msg="Removing container: 75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5" id=36e67727-6b25-4ba8-9281-3364dd1f666e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:35 no-preload-753103 crio[568]: time="2025-12-12T20:10:35.494443886Z" level=info msg="Removed container 75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=36e67727-6b25-4ba8-9281-3364dd1f666e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	93292299ed71a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   3                   464c42ffc45e8       dashboard-metrics-scraper-867fb5f87b-s8zkt   kubernetes-dashboard
	a853da2b8ba92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   ec6a251731817       storage-provisioner                          kube-system
	163acd72ae860       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   4b4b2cd38e785       kubernetes-dashboard-b84665fb8-7c9ms         kubernetes-dashboard
	c6a045838db0a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   02326a1ee4d5c       busybox                                      default
	c0fdb2aafae83       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   009e1d9861588       coredns-7d764666f9-pbqw6                     kube-system
	1da232930b122       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   a9f756711605f       kindnet-p4b57                                kube-system
	87e1fd79ab4c6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   ec6a251731817       storage-provisioner                          kube-system
	8694fb568f618       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           54 seconds ago      Running             kube-proxy                  0                   12dbbb22b3094       kube-proxy-xn425                             kube-system
	0de1318190774       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           57 seconds ago      Running             kube-apiserver              0                   912de1fc8bf49       kube-apiserver-no-preload-753103             kube-system
	452a3991e4df4       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           57 seconds ago      Running             kube-controller-manager     0                   2db766f70d364       kube-controller-manager-no-preload-753103    kube-system
	3286e3a649780       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   a145e3ed57b24       etcd-no-preload-753103                       kube-system
	249b72d350355       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           57 seconds ago      Running             kube-scheduler              0                   3838b923e8187       kube-scheduler-no-preload-753103             kube-system
	
	
	==> coredns [c0fdb2aafae83a7764a44b93a09f4e725a31b95478d233fb8585e31d03e106f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51450 - 62927 "HINFO IN 1495439786791351106.6762009502390536913. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.102620693s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-753103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-753103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=no-preload-753103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_08_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-753103
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:10:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-753103
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                f5184786-74a4-443d-967a-ec8e68a8cf1e
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-pbqw6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-753103                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-p4b57                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-753103              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-753103     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-xn425                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-753103              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-s8zkt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-7c9ms          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-753103 event: Registered Node no-preload-753103 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-753103 event: Registered Node no-preload-753103 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [3286e3a6497804378907ab37416b64a0519732034946847c95152a1d59829cc2] <==
	{"level":"warn","ts":"2025-12-12T20:09:53.118320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.125660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.138415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.145460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.153061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.159734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.166608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.173325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.180938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.188975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.195571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.202762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.209650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.227112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.239999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.246299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.298364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:34.847637Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.756104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:10:34.847754Z","caller":"traceutil/trace.go:172","msg":"trace[1936535482] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:683; }","duration":"246.883249ms","start":"2025-12-12T20:10:34.600849Z","end":"2025-12-12T20:10:34.847733Z","steps":["trace[1936535482] 'agreement among raft nodes before linearized reading'  (duration: 55.462421ms)","trace[1936535482] 'range keys from in-memory index tree'  (duration: 191.221053ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:34.848250Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.332697ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597681555184421 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-753103\" mod_revision:669 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-12T20:10:34.848353Z","caller":"traceutil/trace.go:172","msg":"trace[167036480] linearizableReadLoop","detail":"{readStateIndex:722; appliedIndex:721; }","duration":"190.089114ms","start":"2025-12-12T20:10:34.658252Z","end":"2025-12-12T20:10:34.848341Z","steps":["trace[167036480] 'read index received'  (duration: 58.92µs)","trace[167036480] 'applied index is now lower than readState.Index'  (duration: 190.029277ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:34.848393Z","caller":"traceutil/trace.go:172","msg":"trace[1364742130] transaction","detail":"{read_only:false; response_revision:684; number_of_response:1; }","duration":"310.230363ms","start":"2025-12-12T20:10:34.538142Z","end":"2025-12-12T20:10:34.848373Z","steps":["trace[1364742130] 'process raft request'  (duration: 118.195191ms)","trace[1364742130] 'compare'  (duration: 191.22072ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:34.848466Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.21368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt.188090be26dc9010\" limit:1 ","response":"range_response_count:1 size:847"}
	{"level":"info","ts":"2025-12-12T20:10:34.848678Z","caller":"traceutil/trace.go:172","msg":"trace[1756200953] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt.188090be26dc9010; range_end:; response_count:1; response_revision:684; }","duration":"190.410619ms","start":"2025-12-12T20:10:34.658242Z","end":"2025-12-12T20:10:34.848653Z","steps":["trace[1756200953] 'agreement among raft nodes before linearized reading'  (duration: 190.133445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:10:34.848635Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T20:10:34.538114Z","time spent":"310.442323ms","remote":"127.0.0.1:49350","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-753103\" mod_revision:669 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" > >"}
	
	
	==> kernel <==
	 20:10:49 up 53 min,  0 user,  load average: 3.18, 2.03, 1.50
	Linux no-preload-753103 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1da232930b1225ba26cc77335c5fd77023b588fafd9332d44b052afc26a6740d] <==
	I1212 20:09:54.957391       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:09:54.957694       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1212 20:09:54.957883       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:09:54.957908       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:09:54.957931       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:09:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:09:55.161323       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:09:55.161660       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:09:55.161704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:09:55.161830       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:09:55.462321       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:09:55.462350       1 metrics.go:72] Registering metrics
	I1212 20:09:55.462403       1 controller.go:711] "Syncing nftables rules"
	I1212 20:10:05.162017       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:05.162091       1 main.go:301] handling current node
	I1212 20:10:15.164373       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:15.164411       1 main.go:301] handling current node
	I1212 20:10:25.161521       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:25.161557       1 main.go:301] handling current node
	I1212 20:10:35.164349       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:35.164393       1 main.go:301] handling current node
	I1212 20:10:45.164398       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:45.164434       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0de13181907744cb32a821b90949248f3f382280f37f0ac21d7a4e83b8b9f488] <==
	I1212 20:09:53.772723       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 20:09:53.772831       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 20:09:53.773143       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 20:09:53.773165       1 aggregator.go:187] initial CRD sync complete...
	I1212 20:09:53.773175       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:09:53.773180       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:09:53.773186       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:09:53.773339       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 20:09:53.779899       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:09:53.783023       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1212 20:09:53.787659       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:09:53.793344       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:09:53.836091       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:09:54.059824       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:09:54.085121       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:09:54.100432       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:09:54.105952       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:09:54.111538       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:09:54.140871       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.140.195"}
	I1212 20:09:54.149678       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.155.144"}
	I1212 20:09:54.675749       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 20:09:57.426745       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:09:57.426792       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:09:57.476828       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:09:57.527059       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [452a3991e4df436dd9d2ad0b08c3ffa20c78ded9ad019978d64bd40f23d993a8] <==
	I1212 20:09:56.930525       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.930777       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.929045       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.929982       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931102       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931136       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931202       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931225       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 20:09:56.931337       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931392       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931338       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-753103"
	I1212 20:09:56.931677       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931677       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931795       1 range_allocator.go:177] "Sending events to api server"
	I1212 20:09:56.931836       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1212 20:09:56.931845       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:56.931677       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931677       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 20:09:56.931851       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.937055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:56.941598       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:57.031482       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:57.031503       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 20:09:57.031509       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 20:09:57.038149       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8694fb568f6184f280ef0979168c88307d2d2ce6abadf548201dab5907b1dec2] <==
	I1212 20:09:54.753063       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:09:54.834368       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:54.934728       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:54.934760       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1212 20:09:54.934844       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:09:54.952754       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:09:54.952796       1 server_linux.go:136] "Using iptables Proxier"
	I1212 20:09:54.957539       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:09:54.957935       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 20:09:54.957955       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:54.959204       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:09:54.959228       1 config.go:200] "Starting service config controller"
	I1212 20:09:54.959241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:09:54.959233       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:09:54.959289       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:09:54.959297       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:09:54.959308       1 config.go:309] "Starting node config controller"
	I1212 20:09:54.959316       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:09:54.959324       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:09:55.060340       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:09:55.060350       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:09:55.060390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [249b72d350355577980958226dcfac379cd22975003283e5e7acd74458648cfc] <==
	I1212 20:09:52.012018       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:09:53.691459       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:09:53.691498       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:09:53.691511       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:09:53.691521       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:09:53.768096       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 20:09:53.768184       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:53.772256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:09:53.772339       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:53.772891       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:09:53.772980       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:09:53.873379       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 20:10:10 no-preload-753103 kubelet[722]: E1212 20:10:10.034408     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: E1212 20:10:11.342541     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: I1212 20:10:11.342582     722 scope.go:122] "RemoveContainer" containerID="ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: I1212 20:10:11.416243     722 scope.go:122] "RemoveContainer" containerID="ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: E1212 20:10:11.416467     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: I1212 20:10:11.416505     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: E1212 20:10:11.416700     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:20 no-preload-753103 kubelet[722]: E1212 20:10:20.034025     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:20 no-preload-753103 kubelet[722]: I1212 20:10:20.034061     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:20 no-preload-753103 kubelet[722]: E1212 20:10:20.034238     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:25 no-preload-753103 kubelet[722]: I1212 20:10:25.447727     722 scope.go:122] "RemoveContainer" containerID="87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0"
	Dec 12 20:10:31 no-preload-753103 kubelet[722]: E1212 20:10:31.326382     722 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pbqw6" containerName="coredns"
	Dec 12 20:10:34 no-preload-753103 kubelet[722]: E1212 20:10:34.341835     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:34 no-preload-753103 kubelet[722]: I1212 20:10:34.341871     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: I1212 20:10:35.478171     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: E1212 20:10:35.478587     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: I1212 20:10:35.478611     722 scope.go:122] "RemoveContainer" containerID="93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: E1212 20:10:35.478785     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:40 no-preload-753103 kubelet[722]: E1212 20:10:40.033902     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:40 no-preload-753103 kubelet[722]: I1212 20:10:40.033949     722 scope.go:122] "RemoveContainer" containerID="93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	Dec 12 20:10:40 no-preload-753103 kubelet[722]: E1212 20:10:40.034146     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:45 no-preload-753103 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:10:45 no-preload-753103 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:10:45 no-preload-753103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:45 no-preload-753103 systemd[1]: kubelet.service: Consumed 1.665s CPU time.
	
	
	==> kubernetes-dashboard [163acd72ae86023a3eae1b09074158d0b11755431dd837cc567bffd051dfb67d] <==
	2025/12/12 20:10:04 Using namespace: kubernetes-dashboard
	2025/12/12 20:10:04 Using in-cluster config to connect to apiserver
	2025/12/12 20:10:04 Using secret token for csrf signing
	2025/12/12 20:10:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:10:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:10:04 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/12 20:10:04 Generating JWE encryption key
	2025/12/12 20:10:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:10:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:10:04 Initializing JWE encryption key from synchronized object
	2025/12/12 20:10:04 Creating in-cluster Sidecar client
	2025/12/12 20:10:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:10:04 Serving insecurely on HTTP port: 9090
	2025/12/12 20:10:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:10:04 Starting overwatch
	
	
	==> storage-provisioner [87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0] <==
	I1212 20:09:54.721076       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:10:24.727862       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37] <==
	I1212 20:10:25.503254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:10:25.511240       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:10:25.511299       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:10:25.513253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:28.968003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:33.228529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:36.827374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:39.881237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:42.903468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:42.935944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:10:42.936120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:10:42.936224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a4a2be5-48cb-4e82-81d6-ec5f27edd4fa", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-753103_a2b4372c-558e-41cd-8913-ee3477ff0de7 became leader
	I1212 20:10:42.936307       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-753103_a2b4372c-558e-41cd-8913-ee3477ff0de7!
	W1212 20:10:42.938406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:43.006607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:10:43.036967       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-753103_a2b4372c-558e-41cd-8913-ee3477ff0de7!
	W1212 20:10:45.037053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:45.131349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:47.134711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:47.172705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:49.178774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:49.186587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753103 -n no-preload-753103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753103 -n no-preload-753103: exit status 2 (358.211925ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-753103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-753103
helpers_test.go:244: (dbg) docker inspect no-preload-753103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd",
	        "Created": "2025-12-12T20:08:31.941720816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281502,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:09:45.308654013Z",
	            "FinishedAt": "2025-12-12T20:09:44.412072699Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/hosts",
	        "LogPath": "/var/lib/docker/containers/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd/452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd-json.log",
	        "Name": "/no-preload-753103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-753103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-753103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "452e89832e40aed0dcc87fb5c8d33b609854c2becd1774e2a09d3c6d345e07dd",
	                "LowerDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/520211cd2383e798b47ab216c7c60903d51535a3971d5244a2d0383f153e65e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-753103",
	                "Source": "/var/lib/docker/volumes/no-preload-753103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-753103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-753103",
	                "name.minikube.sigs.k8s.io": "no-preload-753103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b77079e574a8811cf1119bfd423c70cef66dd83914d25dba4759248caed172d",
	            "SandboxKey": "/var/run/docker/netns/6b77079e574a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-753103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5f00e9d4498c5a5c29031e27e31d73fb062b781edd69002a7dac693e0d7a335",
	                    "EndpointID": "0c7e13672c05dae95b8886235385614ba264bf49fe3354f74a09acbe06a644f3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ce:6b:03:8d:e9:d5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-753103",
	                        "452e89832e40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
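
The inspect output above is the post-mortem evidence for the failed Pause step: the kic container for profile no-preload-753103 still reports "Status": "running" and "Paused": false. As a minimal sketch (not part of the test harness), the same check can be reduced to one formatted docker inspect call from Go; the profile name is taken from this run, and the helper name containerPauseState is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerPauseState shells out to `docker container inspect` and returns
// the container's State.Status and State.Paused fields.
func containerPauseState(name string) (string, bool, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}} {{.State.Paused}}").Output()
	if err != nil {
		return "", false, fmt.Errorf("docker inspect %s: %w", name, err)
	}
	fields := strings.Fields(string(out))
	if len(fields) != 2 {
		return "", false, fmt.Errorf("unexpected inspect output: %q", out)
	}
	return fields[0], fields[1] == "true", nil
}

func main() {
	status, paused, err := containerPauseState("no-preload-753103")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the run captured above this prints: status=running paused=false
	fmt.Printf("status=%s paused=%v\n", status, paused)
}
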
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103: exit status 2 (388.344539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
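
The status probe above prints Running for the host yet exits with status 2, which the harness records as possibly benign before collecting logs. Below is a rough sketch, under the same assumptions, of running that probe from Go while capturing the exit code instead of aborting on it; the binary path, profile, and node name are the ones used in this run.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the post-mortem runs: ask only for the Host field of the profile.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-753103", "-n", "no-preload-753103")
	out, err := cmd.Output()
	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit does not necessarily mean the host is down;
		// the harness above treats exit status 2 as "may be ok".
		exitCode = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	// For the run captured above this prints: host=Running exit=2
	fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), exitCode)
}
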
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-753103 logs -n 25: (1.130688269s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ delete  │ -p cert-expiration-070436                                                                                                                                                                                                                            │ cert-expiration-070436       │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:08 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-824670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p old-k8s-version-824670 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p no-preload-753103 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-824670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ addons  │ enable dashboard -p no-preload-753103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                            │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                         │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p disable-driver-mounts-044739                                                                                                                                                                                                                      │ disable-driver-mounts-044739 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ no-preload-753103 image list --format=json                                                                                                                                                                                                           │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p no-preload-753103 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:41.692529  295304 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:41.692832  295304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:41.692845  295304 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:41.692853  295304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:41.693166  295304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:10:41.693721  295304 out.go:368] Setting JSON to false
	I1212 20:10:41.694958  295304 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3189,"bootTime":1765567053,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:10:41.695027  295304 start.go:143] virtualization: kvm guest
	I1212 20:10:41.697036  295304 out.go:179] * [embed-certs-399565] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:10:41.698462  295304 notify.go:221] Checking for updates...
	I1212 20:10:41.698482  295304 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:41.699614  295304 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:41.701198  295304 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:10:41.702501  295304 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:10:41.703721  295304 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:10:41.706914  295304 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:41.708655  295304 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:41.708809  295304 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:41.708941  295304 config.go:182] Loaded profile config "no-preload-753103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:41.709162  295304 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:41.736375  295304 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:10:41.736489  295304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:41.805184  295304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-12 20:10:41.794085924 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:41.805291  295304 docker.go:319] overlay module found
	I1212 20:10:41.806950  295304 out.go:179] * Using the docker driver based on user configuration
	I1212 20:10:41.808086  295304 start.go:309] selected driver: docker
	I1212 20:10:41.808117  295304 start.go:927] validating driver "docker" against <nil>
	I1212 20:10:41.808130  295304 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:41.808903  295304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:41.879218  295304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-12 20:10:41.868966394 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:41.879408  295304 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:10:41.879609  295304 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:10:41.881378  295304 out.go:179] * Using Docker driver with root privileges
	I1212 20:10:41.882511  295304 cni.go:84] Creating CNI manager for ""
	I1212 20:10:41.882610  295304 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:41.882625  295304 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:10:41.882687  295304 start.go:353] cluster config:
	{Name:embed-certs-399565 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-399565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:41.884011  295304 out.go:179] * Starting "embed-certs-399565" primary control-plane node in "embed-certs-399565" cluster
	I1212 20:10:41.884959  295304 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:10:41.886157  295304 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:10:41.887252  295304 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:41.887293  295304 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:10:41.887304  295304 cache.go:65] Caching tarball of preloaded images
	I1212 20:10:41.887360  295304 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:10:41.887384  295304 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:10:41.887394  295304 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:10:41.887495  295304 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/config.json ...
	I1212 20:10:41.887520  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/config.json: {Name:mk930d7a15f7dbca00bf49663208fb3e1c8a9b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.908425  295304 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:10:41.908454  295304 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:10:41.908474  295304 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:10:41.908507  295304 start.go:360] acquireMachinesLock for embed-certs-399565: {Name:mk1cab5bf8b327e3a1e1090095b68f2974d5f79b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:10:41.908618  295304 start.go:364] duration metric: took 89.092µs to acquireMachinesLock for "embed-certs-399565"
	I1212 20:10:41.908647  295304 start.go:93] Provisioning new machine with config: &{Name:embed-certs-399565 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-399565 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:41.908725  295304 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:10:41.571612  289770 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-433034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:41.591935  289770 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1212 20:10:41.596423  289770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:41.609968  289770 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:10:41.610111  289770 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:41.610194  289770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:41.647065  289770 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:41.647086  289770 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:10:41.647127  289770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:41.674477  289770 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:41.674503  289770 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:10:41.674512  289770 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1212 20:10:41.674617  289770 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-433034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:10:41.674696  289770 ssh_runner.go:195] Run: crio config
	I1212 20:10:41.727974  289770 cni.go:84] Creating CNI manager for ""
	I1212 20:10:41.728000  289770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:41.728020  289770 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:10:41.728050  289770 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-433034 NodeName:default-k8s-diff-port-433034 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:10:41.728223  289770 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-433034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:10:41.728304  289770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:10:41.736985  289770 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:10:41.737042  289770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:10:41.745683  289770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 20:10:41.760522  289770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:10:41.782567  289770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1212 20:10:41.799056  289770 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:10:41.803567  289770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:41.815701  289770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:41.922909  289770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:41.950098  289770 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034 for IP: 192.168.103.2
	I1212 20:10:41.950121  289770 certs.go:195] generating shared ca certs ...
	I1212 20:10:41.950143  289770 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.950324  289770 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:10:41.950391  289770 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:10:41.950406  289770 certs.go:257] generating profile certs ...
	I1212 20:10:41.950459  289770 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.key
	I1212 20:10:41.950479  289770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.crt with IP's: []
	I1212 20:10:41.978857  289770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.crt ...
	I1212 20:10:41.978887  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.crt: {Name:mk2bf73a6340ccd36d94f4e2152bd3802b73c6fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.979052  289770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.key ...
	I1212 20:10:41.979072  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/client.key: {Name:mk06acdffbf93d44bcb7e25a2047d506206b8423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:41.979193  289770 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78
	I1212 20:10:41.979217  289770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1212 20:10:42.069552  289770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78 ...
	I1212 20:10:42.069585  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78: {Name:mk954e68adcec24c91161e9771f465062205d1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:42.069819  289770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78 ...
	I1212 20:10:42.069843  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78: {Name:mk67a622c1f12d08eedf35004a6ad825a0644108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:42.069966  289770 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt.65d27d78 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt
	I1212 20:10:42.070080  289770 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key.65d27d78 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key
	I1212 20:10:42.070170  289770 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key
	I1212 20:10:42.070188  289770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt with IP's: []
	I1212 20:10:42.314746  289770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt ...
	I1212 20:10:42.314777  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt: {Name:mk94d580659e7189bf2baf23d2f8504f8aef4985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:42.374361  289770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key ...
	I1212 20:10:42.374391  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key: {Name:mkb5065b604b0aa12be72c40f05b38b18f8204b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:42.374653  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:10:42.374703  289770 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:10:42.374720  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:10:42.374747  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:10:42.374774  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:10:42.374809  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:10:42.374869  289770 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:10:42.375596  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:10:42.396082  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:10:42.413213  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:10:42.430418  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:10:42.447205  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:10:42.463675  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1212 20:10:42.481534  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:10:42.498588  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/default-k8s-diff-port-433034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:10:42.515082  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:10:42.540720  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:10:42.558289  289770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:10:42.574960  289770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:10:42.587452  289770 ssh_runner.go:195] Run: openssl version
	I1212 20:10:42.593439  289770 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.600668  289770 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:10:42.608626  289770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.612218  289770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.612268  289770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:42.646424  289770 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:10:42.654255  289770 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:10:42.661698  289770 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.669000  289770 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:10:42.678662  289770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.682370  289770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.682433  289770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:10:42.717398  289770 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:10:42.725365  289770 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
	I1212 20:10:42.732780  289770 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.740098  289770 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:10:42.747566  289770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.751169  289770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.751221  289770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:10:42.794427  289770 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:10:42.803760  289770 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:10:42.812167  289770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:10:42.816546  289770 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:10:42.816607  289770 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-433034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-433034 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:42.816672  289770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:10:42.816714  289770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:10:42.844343  289770 cri.go:89] found id: ""
	I1212 20:10:42.844421  289770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:10:42.852381  289770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:42.859963  289770 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:42.860006  289770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:42.867465  289770 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:42.867487  289770 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:42.867531  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 20:10:42.874804  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:42.874861  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:42.881738  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 20:10:42.888789  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:42.888834  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:42.895637  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 20:10:42.903340  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:42.903390  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:42.912496  289770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 20:10:42.920011  289770 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:42.920060  289770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
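(Annotation) The four grep/rm pairs above are minikube's stale-kubeconfig check: any file under /etc/kubernetes that does not mention the expected API endpoint (https://control-plane.minikube.internal:8444 for this profile) is removed so kubeadm can write a fresh one. A rough shell equivalent, with the endpoint and file list taken from the commands above:

	endpoint="https://control-plane.minikube.internal:8444"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # missing or pointing at the wrong endpoint -> delete and let kubeadm regenerate it
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done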
	I1212 20:10:42.927216  289770 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:42.971358  289770 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:10:42.971451  289770 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:10:42.991280  289770 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:10:42.991382  289770 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:10:42.991443  289770 kubeadm.go:319] OS: Linux
	I1212 20:10:42.991528  289770 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:10:42.991593  289770 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:10:42.991663  289770 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:10:42.991730  289770 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:10:42.991809  289770 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:10:42.991878  289770 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:10:42.991933  289770 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:10:42.992013  289770 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:10:43.054439  289770 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:10:43.054589  289770 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:10:43.054748  289770 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:10:43.062502  289770 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:10:39.992548  294089 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:10:39.992827  294089 start.go:159] libmachine.API.Create for "newest-cni-832562" (driver="docker")
	I1212 20:10:39.992864  294089 client.go:173] LocalClient.Create starting
	I1212 20:10:39.992942  294089 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:10:39.992996  294089 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:39.993023  294089 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:39.993116  294089 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:10:39.993153  294089 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:39.993167  294089 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:39.993579  294089 cli_runner.go:164] Run: docker network inspect newest-cni-832562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:40.012789  294089 cli_runner.go:211] docker network inspect newest-cni-832562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:40.012882  294089 network_create.go:284] running [docker network inspect newest-cni-832562] to gather additional debugging logs...
	I1212 20:10:40.012910  294089 cli_runner.go:164] Run: docker network inspect newest-cni-832562
	W1212 20:10:40.030825  294089 cli_runner.go:211] docker network inspect newest-cni-832562 returned with exit code 1
	I1212 20:10:40.030857  294089 network_create.go:287] error running [docker network inspect newest-cni-832562]: docker network inspect newest-cni-832562: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-832562 not found
	I1212 20:10:40.030871  294089 network_create.go:289] output of [docker network inspect newest-cni-832562]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-832562 not found
	
	** /stderr **
	I1212 20:10:40.031082  294089 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:40.051303  294089 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:10:40.051976  294089 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:10:40.052677  294089 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:10:40.053470  294089 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e2b100}
	I1212 20:10:40.053499  294089 network_create.go:124] attempt to create docker network newest-cni-832562 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 20:10:40.053540  294089 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-832562 newest-cni-832562
	I1212 20:10:40.273000  294089 network_create.go:108] docker network newest-cni-832562 192.168.76.0/24 created
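(Annotation) Before this create, the log shows minikube scanning the private 192.168.x.0/24 ranges already claimed by other bridges (49, 58, 67) and settling on the first free one, 192.168.76.0/24. The resulting network can be checked afterwards with a docker inspect template; this is a sketch, not part of the test run:

	docker network inspect newest-cni-832562 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'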
	I1212 20:10:40.273034  294089 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-832562" container
	I1212 20:10:40.273082  294089 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:40.292532  294089 cli_runner.go:164] Run: docker volume create newest-cni-832562 --label name.minikube.sigs.k8s.io=newest-cni-832562 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:40.311252  294089 oci.go:103] Successfully created a docker volume newest-cni-832562
	I1212 20:10:40.311339  294089 cli_runner.go:164] Run: docker run --rm --name newest-cni-832562-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-832562 --entrypoint /usr/bin/test -v newest-cni-832562:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:41.346201  294089 cli_runner.go:217] Completed: docker run --rm --name newest-cni-832562-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-832562 --entrypoint /usr/bin/test -v newest-cni-832562:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (1.034822817s)
	I1212 20:10:41.346227  294089 oci.go:107] Successfully prepared a docker volume newest-cni-832562
	I1212 20:10:41.346298  294089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:10:41.346317  294089 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:41.346365  294089 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-832562:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:10:44.099834  294089 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-832562:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (2.753414061s)
	I1212 20:10:44.099872  294089 kic.go:203] duration metric: took 2.753549183s to extract preloaded images to volume ...
	W1212 20:10:44.099965  294089 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:10:44.099999  294089 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:10:44.100041  294089 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:10:44.161254  294089 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-832562 --name newest-cni-832562 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-832562 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-832562 --network newest-cni-832562 --ip 192.168.76.2 --volume newest-cni-832562:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:10:44.470232  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Running}}
	I1212 20:10:44.496900  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:10:44.520446  294089 cli_runner.go:164] Run: docker exec newest-cni-832562 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:10:44.569856  294089 oci.go:144] the created container "newest-cni-832562" has a running status.
	I1212 20:10:44.569910  294089 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa...
	I1212 20:10:44.662011  294089 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:10:44.692652  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:10:44.723255  294089 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:10:44.723410  294089 kic_runner.go:114] Args: [docker exec --privileged newest-cni-832562 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:10:44.779155  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:10:43.069924  289770 out.go:252]   - Generating certificates and keys ...
	I1212 20:10:43.070036  289770 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:10:43.070149  289770 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:10:43.165645  289770 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:10:43.527630  289770 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:10:43.661648  289770 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:10:43.994295  289770 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:10:44.199883  289770 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:10:44.200617  289770 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-433034 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 20:10:44.355814  289770 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:10:44.356007  289770 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-433034 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1212 20:10:44.572900  289770 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:10:44.662865  289770 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:10:44.891529  289770 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:10:44.908510  289770 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:10:45.495889  289770 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:10:45.547477  289770 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:10:45.619498  289770 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:10:45.744458  289770 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:10:46.017231  289770 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:10:46.017903  289770 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:10:46.026939  289770 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:10:46.028420  289770 out.go:252]   - Booting up control plane ...
	I1212 20:10:46.028551  289770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:10:46.028703  289770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:10:46.029392  289770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:10:46.042736  289770 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:10:46.042879  289770 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:10:46.051222  289770 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:10:46.051646  289770 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:10:46.051713  289770 kubeadm.go:319] [kubelet-start] Starting the kubelet
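(Annotation) The kubeadm output above walks the standard init phases for this profile: certs, kubeconfig files, etcd and control-plane static-pod manifests, then kubelet start. For illustration only (not something this run executes separately), an individual phase can be re-run against the same config via kubeadm's phase subcommands:

	sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml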
	I1212 20:10:41.910828  295304 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:10:41.911109  295304 start.go:159] libmachine.API.Create for "embed-certs-399565" (driver="docker")
	I1212 20:10:41.911156  295304 client.go:173] LocalClient.Create starting
	I1212 20:10:41.911265  295304 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:10:41.911321  295304 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:41.911348  295304 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:41.911415  295304 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:10:41.911446  295304 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:41.911465  295304 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:41.911882  295304 cli_runner.go:164] Run: docker network inspect embed-certs-399565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:41.931002  295304 cli_runner.go:211] docker network inspect embed-certs-399565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:41.931085  295304 network_create.go:284] running [docker network inspect embed-certs-399565] to gather additional debugging logs...
	I1212 20:10:41.931108  295304 cli_runner.go:164] Run: docker network inspect embed-certs-399565
	W1212 20:10:41.951793  295304 cli_runner.go:211] docker network inspect embed-certs-399565 returned with exit code 1
	I1212 20:10:41.951830  295304 network_create.go:287] error running [docker network inspect embed-certs-399565]: docker network inspect embed-certs-399565: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-399565 not found
	I1212 20:10:41.951846  295304 network_create.go:289] output of [docker network inspect embed-certs-399565]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-399565 not found
	
	** /stderr **
	I1212 20:10:41.952014  295304 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:41.973077  295304 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:10:41.973991  295304 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:10:41.974930  295304 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:10:41.975699  295304 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5b0e30eb6e7a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:34:8e:df:07:77} reservation:<nil>}
	I1212 20:10:41.976338  295304 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c5f00e9d4498 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:36:57:36:b3:ba:39} reservation:<nil>}
	I1212 20:10:41.977313  295304 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f646a0}
	I1212 20:10:41.977342  295304 network_create.go:124] attempt to create docker network embed-certs-399565 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1212 20:10:41.977400  295304 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-399565 embed-certs-399565
	I1212 20:10:42.030736  295304 network_create.go:108] docker network embed-certs-399565 192.168.94.0/24 created
	I1212 20:10:42.030769  295304 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-399565" container
	I1212 20:10:42.030838  295304 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:42.050291  295304 cli_runner.go:164] Run: docker volume create embed-certs-399565 --label name.minikube.sigs.k8s.io=embed-certs-399565 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:42.071788  295304 oci.go:103] Successfully created a docker volume embed-certs-399565
	I1212 20:10:42.071861  295304 cli_runner.go:164] Run: docker run --rm --name embed-certs-399565-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-399565 --entrypoint /usr/bin/test -v embed-certs-399565:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:44.346604  295304 cli_runner.go:217] Completed: docker run --rm --name embed-certs-399565-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-399565 --entrypoint /usr/bin/test -v embed-certs-399565:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (2.274699523s)
	I1212 20:10:44.346637  295304 oci.go:107] Successfully prepared a docker volume embed-certs-399565
	I1212 20:10:44.346715  295304 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:44.346730  295304 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:44.346802  295304 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-399565:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:10:44.809067  294089 machine.go:94] provisionDockerMachine start ...
	I1212 20:10:44.809263  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:44.842065  294089 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:44.842477  294089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1212 20:10:44.842497  294089 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:10:44.843444  294089 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40432->127.0.0.1:33084: read: connection reset by peer
	I1212 20:10:47.975203  294089 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:10:47.975231  294089 ubuntu.go:182] provisioning hostname "newest-cni-832562"
	I1212 20:10:47.975310  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:47.993972  294089 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:47.994267  294089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1212 20:10:47.994297  294089 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-832562 && echo "newest-cni-832562" | sudo tee /etc/hostname
	I1212 20:10:48.141363  294089 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:10:48.141439  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:48.158918  294089 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:48.159184  294089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1212 20:10:48.159209  294089 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-832562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-832562/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-832562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:10:48.292762  294089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:10:48.292797  294089 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:10:48.292835  294089 ubuntu.go:190] setting up certificates
	I1212 20:10:48.292849  294089 provision.go:84] configureAuth start
	I1212 20:10:48.292926  294089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:10:48.326950  294089 provision.go:143] copyHostCerts
	I1212 20:10:48.327085  294089 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:10:48.327097  294089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:10:48.327202  294089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:10:48.327403  294089 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:10:48.327451  294089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:10:48.327515  294089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:10:48.327690  294089 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:10:48.327712  294089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:10:48.327775  294089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:10:48.327876  294089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.newest-cni-832562 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-832562]
	I1212 20:10:48.367056  294089 provision.go:177] copyRemoteCerts
	I1212 20:10:48.367133  294089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:10:48.367183  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:48.399463  294089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:10:48.513971  294089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:10:48.546352  294089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:10:48.571127  294089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:10:48.595279  294089 provision.go:87] duration metric: took 302.404372ms to configureAuth
	I1212 20:10:48.595350  294089 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:10:48.595544  294089 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:48.595649  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:48.620052  294089 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:48.624360  294089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1212 20:10:48.624395  294089 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:10:48.972835  294089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:10:48.972875  294089 machine.go:97] duration metric: took 4.163742047s to provisionDockerMachine
	I1212 20:10:48.972888  294089 client.go:176] duration metric: took 8.980014457s to LocalClient.Create
	I1212 20:10:48.972903  294089 start.go:167] duration metric: took 8.980078139s to libmachine.API.Create "newest-cni-832562"
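(Annotation) The SSH command a few lines up is how minikube configures CRI-O for in-cluster registries: it writes a CRIO_MINIKUBE_OPTIONS override to /etc/sysconfig/crio.minikube marking the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts the crio service. A standalone sketch of the same write, with values copied from the log:

	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio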
	I1212 20:10:48.972912  294089 start.go:293] postStartSetup for "newest-cni-832562" (driver="docker")
	I1212 20:10:48.972936  294089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:10:48.973009  294089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:10:48.973054  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:49.009437  294089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:10:49.124458  294089 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:10:49.131608  294089 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:10:49.131741  294089 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:10:49.131757  294089 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:10:49.131816  294089 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:10:49.131929  294089 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:10:49.132080  294089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:10:49.143096  294089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:10:49.169938  294089 start.go:296] duration metric: took 197.003748ms for postStartSetup
	I1212 20:10:49.170486  294089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:10:49.199904  294089 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/config.json ...
	I1212 20:10:49.200186  294089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:10:49.200236  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:49.227743  294089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:10:49.333197  294089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:10:49.339077  294089 start.go:128] duration metric: took 9.348989764s to createHost
	I1212 20:10:49.339101  294089 start.go:83] releasing machines lock for "newest-cni-832562", held for 9.349136792s
	I1212 20:10:49.339176  294089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:10:49.362251  294089 ssh_runner.go:195] Run: cat /version.json
	I1212 20:10:49.362324  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:49.362392  294089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:10:49.362478  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:10:49.386137  294089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:10:49.387340  294089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:10:49.571129  294089 ssh_runner.go:195] Run: systemctl --version
	I1212 20:10:49.580659  294089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:10:49.627529  294089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:10:49.632972  294089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:10:49.633048  294089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:10:49.665354  294089 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:10:49.665382  294089 start.go:496] detecting cgroup driver to use...
	I1212 20:10:49.665414  294089 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:10:49.665470  294089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:10:49.688502  294089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:10:49.702505  294089 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:10:49.702556  294089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:10:49.725660  294089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:10:49.746744  294089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	
	
	==> CRI-O <==
	Dec 12 20:10:11 no-preload-753103 crio[568]: time="2025-12-12T20:10:11.388342597Z" level=info msg="Started container" PID=1749 containerID=75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper id=7d881afc-c710-440f-85f0-503da6cd8667 name=/runtime.v1.RuntimeService/StartContainer sandboxID=464c42ffc45e8bce36830d801cb34bc423ba9b850124801dfcee890dcbdb3c0d
	Dec 12 20:10:11 no-preload-753103 crio[568]: time="2025-12-12T20:10:11.417498141Z" level=info msg="Removing container: ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea" id=d3bc9c6e-fda3-462c-ba0f-31103386154c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:11 no-preload-753103 crio[568]: time="2025-12-12T20:10:11.426235725Z" level=info msg="Removed container ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=d3bc9c6e-fda3-462c-ba0f-31103386154c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.448193718Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5dc28803-36d3-4aa5-a3aa-a6edc58cdc61 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.44909664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=92422fb3-4baa-43fd-aca2-4d35aaa1802f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.450154719Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=451ef8c7-1233-4db5-800d-36a25ee08479 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.450321951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.454951598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.455149104Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f73925e5a554d88bce6cafdf2437ace7e354190f8550acba45466dba0294f17/merged/etc/passwd: no such file or directory"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.455187559Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f73925e5a554d88bce6cafdf2437ace7e354190f8550acba45466dba0294f17/merged/etc/group: no such file or directory"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.455509595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.486833467Z" level=info msg="Created container a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37: kube-system/storage-provisioner/storage-provisioner" id=451ef8c7-1233-4db5-800d-36a25ee08479 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.487466804Z" level=info msg="Starting container: a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37" id=b893b40f-431c-4e3b-a6aa-a1799d150c3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:25 no-preload-753103 crio[568]: time="2025-12-12T20:10:25.48949518Z" level=info msg="Started container" PID=1763 containerID=a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37 description=kube-system/storage-provisioner/storage-provisioner id=b893b40f-431c-4e3b-a6aa-a1799d150c3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec6a25173181706d291b64a15164c93ca81040b5ca46f4c3b71a095ff82184cf
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.342410238Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=74935609-47e7-4da0-a28e-965afeea54dd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.361739471Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8966b53e-c2ee-4dd3-952f-d1eeb7d7a0e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.362839208Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=980d2a4b-421d-42de-9ef5-12b93c11c877 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.362977501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.394866791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.397309559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.655751102Z" level=info msg="Created container 93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=980d2a4b-421d-42de-9ef5-12b93c11c877 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.65656208Z" level=info msg="Starting container: 93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f" id=a85c2ecb-9d0f-4151-bbfc-22aa7673af20 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:10:34 no-preload-753103 crio[568]: time="2025-12-12T20:10:34.659441901Z" level=info msg="Started container" PID=1799 containerID=93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper id=a85c2ecb-9d0f-4151-bbfc-22aa7673af20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=464c42ffc45e8bce36830d801cb34bc423ba9b850124801dfcee890dcbdb3c0d
	Dec 12 20:10:35 no-preload-753103 crio[568]: time="2025-12-12T20:10:35.480500738Z" level=info msg="Removing container: 75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5" id=36e67727-6b25-4ba8-9281-3364dd1f666e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:10:35 no-preload-753103 crio[568]: time="2025-12-12T20:10:35.494443886Z" level=info msg="Removed container 75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt/dashboard-metrics-scraper" id=36e67727-6b25-4ba8-9281-3364dd1f666e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	93292299ed71a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   3                   464c42ffc45e8       dashboard-metrics-scraper-867fb5f87b-s8zkt   kubernetes-dashboard
	a853da2b8ba92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   ec6a251731817       storage-provisioner                          kube-system
	163acd72ae860       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   4b4b2cd38e785       kubernetes-dashboard-b84665fb8-7c9ms         kubernetes-dashboard
	c6a045838db0a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   02326a1ee4d5c       busybox                                      default
	c0fdb2aafae83       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   009e1d9861588       coredns-7d764666f9-pbqw6                     kube-system
	1da232930b122       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   a9f756711605f       kindnet-p4b57                                kube-system
	87e1fd79ab4c6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   ec6a251731817       storage-provisioner                          kube-system
	8694fb568f618       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           56 seconds ago      Running             kube-proxy                  0                   12dbbb22b3094       kube-proxy-xn425                             kube-system
	0de1318190774       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           59 seconds ago      Running             kube-apiserver              0                   912de1fc8bf49       kube-apiserver-no-preload-753103             kube-system
	452a3991e4df4       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           59 seconds ago      Running             kube-controller-manager     0                   2db766f70d364       kube-controller-manager-no-preload-753103    kube-system
	3286e3a649780       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   a145e3ed57b24       etcd-no-preload-753103                       kube-system
	249b72d350355       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           59 seconds ago      Running             kube-scheduler              0                   3838b923e8187       kube-scheduler-no-preload-753103             kube-system
	
	
	==> coredns [c0fdb2aafae83a7764a44b93a09f4e725a31b95478d233fb8585e31d03e106f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51450 - 62927 "HINFO IN 1495439786791351106.6762009502390536913. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.102620693s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-753103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-753103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=no-preload-753103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_08_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-753103
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:10:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:10:24 +0000   Fri, 12 Dec 2025 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-753103
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                f5184786-74a4-443d-967a-ec8e68a8cf1e
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-pbqw6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-753103                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-p4b57                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-753103              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-753103     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-xn425                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-753103              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-s8zkt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-7c9ms          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node no-preload-753103 event: Registered Node no-preload-753103 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-753103 event: Registered Node no-preload-753103 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [3286e3a6497804378907ab37416b64a0519732034946847c95152a1d59829cc2] <==
	{"level":"warn","ts":"2025-12-12T20:09:53.118320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.125660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.138415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.145460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.153061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.159734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.166608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.173325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.180938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.188975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.195571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.202762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.209650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.227112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.239999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.246299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:09:53.298364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:34.847637Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.756104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:10:34.847754Z","caller":"traceutil/trace.go:172","msg":"trace[1936535482] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:683; }","duration":"246.883249ms","start":"2025-12-12T20:10:34.600849Z","end":"2025-12-12T20:10:34.847733Z","steps":["trace[1936535482] 'agreement among raft nodes before linearized reading'  (duration: 55.462421ms)","trace[1936535482] 'range keys from in-memory index tree'  (duration: 191.221053ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:34.848250Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.332697ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597681555184421 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-753103\" mod_revision:669 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-12T20:10:34.848353Z","caller":"traceutil/trace.go:172","msg":"trace[167036480] linearizableReadLoop","detail":"{readStateIndex:722; appliedIndex:721; }","duration":"190.089114ms","start":"2025-12-12T20:10:34.658252Z","end":"2025-12-12T20:10:34.848341Z","steps":["trace[167036480] 'read index received'  (duration: 58.92µs)","trace[167036480] 'applied index is now lower than readState.Index'  (duration: 190.029277ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:34.848393Z","caller":"traceutil/trace.go:172","msg":"trace[1364742130] transaction","detail":"{read_only:false; response_revision:684; number_of_response:1; }","duration":"310.230363ms","start":"2025-12-12T20:10:34.538142Z","end":"2025-12-12T20:10:34.848373Z","steps":["trace[1364742130] 'process raft request'  (duration: 118.195191ms)","trace[1364742130] 'compare'  (duration: 191.22072ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:34.848466Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.21368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt.188090be26dc9010\" limit:1 ","response":"range_response_count:1 size:847"}
	{"level":"info","ts":"2025-12-12T20:10:34.848678Z","caller":"traceutil/trace.go:172","msg":"trace[1756200953] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt.188090be26dc9010; range_end:; response_count:1; response_revision:684; }","duration":"190.410619ms","start":"2025-12-12T20:10:34.658242Z","end":"2025-12-12T20:10:34.848653Z","steps":["trace[1756200953] 'agreement among raft nodes before linearized reading'  (duration: 190.133445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:10:34.848635Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T20:10:34.538114Z","time spent":"310.442323ms","remote":"127.0.0.1:49350","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-753103\" mod_revision:669 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-753103\" > >"}
	
	
	==> kernel <==
	 20:10:51 up 53 min,  0 user,  load average: 3.41, 2.09, 1.52
	Linux no-preload-753103 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1da232930b1225ba26cc77335c5fd77023b588fafd9332d44b052afc26a6740d] <==
	I1212 20:09:54.957391       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:09:54.957694       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1212 20:09:54.957883       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:09:54.957908       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:09:54.957931       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:09:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:09:55.161323       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:09:55.161660       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:09:55.161704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:09:55.161830       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:09:55.462321       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:09:55.462350       1 metrics.go:72] Registering metrics
	I1212 20:09:55.462403       1 controller.go:711] "Syncing nftables rules"
	I1212 20:10:05.162017       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:05.162091       1 main.go:301] handling current node
	I1212 20:10:15.164373       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:15.164411       1 main.go:301] handling current node
	I1212 20:10:25.161521       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:25.161557       1 main.go:301] handling current node
	I1212 20:10:35.164349       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:35.164393       1 main.go:301] handling current node
	I1212 20:10:45.164398       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 20:10:45.164434       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0de13181907744cb32a821b90949248f3f382280f37f0ac21d7a4e83b8b9f488] <==
	I1212 20:09:53.772723       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 20:09:53.772831       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 20:09:53.773143       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 20:09:53.773165       1 aggregator.go:187] initial CRD sync complete...
	I1212 20:09:53.773175       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:09:53.773180       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:09:53.773186       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:09:53.773339       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 20:09:53.779899       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:09:53.783023       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1212 20:09:53.787659       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:09:53.793344       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:09:53.836091       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:09:54.059824       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:09:54.085121       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:09:54.100432       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:09:54.105952       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:09:54.111538       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:09:54.140871       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.140.195"}
	I1212 20:09:54.149678       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.155.144"}
	I1212 20:09:54.675749       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 20:09:57.426745       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:09:57.426792       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:09:57.476828       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:09:57.527059       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [452a3991e4df436dd9d2ad0b08c3ffa20c78ded9ad019978d64bd40f23d993a8] <==
	I1212 20:09:56.930525       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.930777       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.929045       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.929982       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931102       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931136       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931202       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931225       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 20:09:56.931337       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931392       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931338       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-753103"
	I1212 20:09:56.931677       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931677       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931795       1 range_allocator.go:177] "Sending events to api server"
	I1212 20:09:56.931836       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1212 20:09:56.931845       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:56.931677       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.931677       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 20:09:56.931851       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:56.937055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:56.941598       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:57.031482       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:57.031503       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 20:09:57.031509       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 20:09:57.038149       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8694fb568f6184f280ef0979168c88307d2d2ce6abadf548201dab5907b1dec2] <==
	I1212 20:09:54.753063       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:09:54.834368       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:54.934728       1 shared_informer.go:377] "Caches are synced"
	I1212 20:09:54.934760       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1212 20:09:54.934844       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:09:54.952754       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:09:54.952796       1 server_linux.go:136] "Using iptables Proxier"
	I1212 20:09:54.957539       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:09:54.957935       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 20:09:54.957955       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:54.959204       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:09:54.959228       1 config.go:200] "Starting service config controller"
	I1212 20:09:54.959241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:09:54.959233       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:09:54.959289       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:09:54.959297       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:09:54.959308       1 config.go:309] "Starting node config controller"
	I1212 20:09:54.959316       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:09:54.959324       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:09:55.060340       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:09:55.060350       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:09:55.060390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [249b72d350355577980958226dcfac379cd22975003283e5e7acd74458648cfc] <==
	I1212 20:09:52.012018       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:09:53.691459       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:09:53.691498       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:09:53.691511       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:09:53.691521       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:09:53.768096       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 20:09:53.768184       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:09:53.772256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:09:53.772339       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:09:53.772891       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:09:53.772980       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:09:53.873379       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 20:10:10 no-preload-753103 kubelet[722]: E1212 20:10:10.034408     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: E1212 20:10:11.342541     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: I1212 20:10:11.342582     722 scope.go:122] "RemoveContainer" containerID="ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: I1212 20:10:11.416243     722 scope.go:122] "RemoveContainer" containerID="ced759efb2e3ef497f4701340f3b3859c5ec17a2c9399385ff9b6b6b14ac5bea"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: E1212 20:10:11.416467     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: I1212 20:10:11.416505     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:11 no-preload-753103 kubelet[722]: E1212 20:10:11.416700     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:20 no-preload-753103 kubelet[722]: E1212 20:10:20.034025     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:20 no-preload-753103 kubelet[722]: I1212 20:10:20.034061     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:20 no-preload-753103 kubelet[722]: E1212 20:10:20.034238     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:25 no-preload-753103 kubelet[722]: I1212 20:10:25.447727     722 scope.go:122] "RemoveContainer" containerID="87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0"
	Dec 12 20:10:31 no-preload-753103 kubelet[722]: E1212 20:10:31.326382     722 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pbqw6" containerName="coredns"
	Dec 12 20:10:34 no-preload-753103 kubelet[722]: E1212 20:10:34.341835     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:34 no-preload-753103 kubelet[722]: I1212 20:10:34.341871     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: I1212 20:10:35.478171     722 scope.go:122] "RemoveContainer" containerID="75edcde58fc96861c44c360a5a6bd38352a266d7d292cae849f7be39b6f191d5"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: E1212 20:10:35.478587     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: I1212 20:10:35.478611     722 scope.go:122] "RemoveContainer" containerID="93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	Dec 12 20:10:35 no-preload-753103 kubelet[722]: E1212 20:10:35.478785     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:40 no-preload-753103 kubelet[722]: E1212 20:10:40.033902     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" containerName="dashboard-metrics-scraper"
	Dec 12 20:10:40 no-preload-753103 kubelet[722]: I1212 20:10:40.033949     722 scope.go:122] "RemoveContainer" containerID="93292299ed71aee8074161caf32dc608cef3f51f1addaf73da6ffe773de2495f"
	Dec 12 20:10:40 no-preload-753103 kubelet[722]: E1212 20:10:40.034146     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-s8zkt_kubernetes-dashboard(f397e482-3c62-4935-9fa7-ad93318eb694)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-s8zkt" podUID="f397e482-3c62-4935-9fa7-ad93318eb694"
	Dec 12 20:10:45 no-preload-753103 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:10:45 no-preload-753103 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:10:45 no-preload-753103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:45 no-preload-753103 systemd[1]: kubelet.service: Consumed 1.665s CPU time.
	
	
	==> kubernetes-dashboard [163acd72ae86023a3eae1b09074158d0b11755431dd837cc567bffd051dfb67d] <==
	2025/12/12 20:10:04 Using namespace: kubernetes-dashboard
	2025/12/12 20:10:04 Using in-cluster config to connect to apiserver
	2025/12/12 20:10:04 Using secret token for csrf signing
	2025/12/12 20:10:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:10:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:10:04 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/12 20:10:04 Generating JWE encryption key
	2025/12/12 20:10:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:10:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:10:04 Initializing JWE encryption key from synchronized object
	2025/12/12 20:10:04 Creating in-cluster Sidecar client
	2025/12/12 20:10:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:10:04 Serving insecurely on HTTP port: 9090
	2025/12/12 20:10:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:10:04 Starting overwatch
	
	
	==> storage-provisioner [87e1fd79ab4c6e57e3cb839d5f0fa3669a8136cd2e22b70a224ec70cb69bc6d0] <==
	I1212 20:09:54.721076       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:10:24.727862       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a853da2b8ba925006c9ac1b26606e6847abdb752cb7caedb7f1e059755fdab37] <==
	I1212 20:10:25.503254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:10:25.511240       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:10:25.511299       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:10:25.513253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:28.968003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:33.228529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:36.827374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:39.881237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:42.903468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:42.935944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:10:42.936120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:10:42.936224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a4a2be5-48cb-4e82-81d6-ec5f27edd4fa", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-753103_a2b4372c-558e-41cd-8913-ee3477ff0de7 became leader
	I1212 20:10:42.936307       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-753103_a2b4372c-558e-41cd-8913-ee3477ff0de7!
	W1212 20:10:42.938406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:43.006607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:10:43.036967       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-753103_a2b4372c-558e-41cd-8913-ee3477ff0de7!
	W1212 20:10:45.037053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:45.131349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:47.134711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:47.172705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:49.178774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:49.186587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:51.190106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:10:51.194877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753103 -n no-preload-753103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753103 -n no-preload-753103: exit status 2 (344.375709ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-753103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.475449ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-832562
helpers_test.go:244: (dbg) docker inspect newest-cni-832562:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd",
	        "Created": "2025-12-12T20:10:44.178344468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296028,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:10:44.22675218Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/hosts",
	        "LogPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd-json.log",
	        "Name": "/newest-cni-832562",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-832562:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-832562",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd",
	                "LowerDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-832562",
	                "Source": "/var/lib/docker/volumes/newest-cni-832562/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-832562",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-832562",
	                "name.minikube.sigs.k8s.io": "newest-cni-832562",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "06ca7b2f3233d1b8b9a78eb995dda1cec994157952e9971f2b3a1d9ac37f980a",
	            "SandboxKey": "/var/run/docker/netns/06ca7b2f3233",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-832562": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b0e30eb6e7a6239611b037a06cb38c24c42431a49eddf41a41622bd55f96edd",
	                    "EndpointID": "f98b542be5336006528ca91b2e8222f75e85fa8c87c24c48f31103c4b3b26d2b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "da:09:b2:c6:99:ea",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-832562",
	                        "2b8b85447870"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-832562 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-824670 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p no-preload-753103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ stop    │ -p no-preload-753103 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-824670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ addons  │ enable dashboard -p no-preload-753103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                            │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                         │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p disable-driver-mounts-044739                                                                                                                                                                                                                      │ disable-driver-mounts-044739 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ no-preload-753103 image list --format=json                                                                                                                                                                                                           │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p no-preload-753103 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p auto-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:55.270613  301411 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:55.270759  301411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:55.270766  301411 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:55.270771  301411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:55.270998  301411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:10:55.271491  301411 out.go:368] Setting JSON to false
	I1212 20:10:55.272713  301411 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3202,"bootTime":1765567053,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:10:55.272800  301411 start.go:143] virtualization: kvm guest
	I1212 20:10:55.275028  301411 out.go:179] * [auto-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:10:55.277173  301411 notify.go:221] Checking for updates...
	I1212 20:10:55.277712  301411 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:55.279116  301411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:55.280713  301411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:10:55.281809  301411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:10:55.283469  301411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:10:55.285800  301411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:55.287247  301411 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:55.287353  301411 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:55.287463  301411 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:55.287544  301411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:55.317658  301411 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:10:55.317759  301411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:55.384971  301411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-12 20:10:55.374793433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:55.385098  301411 docker.go:319] overlay module found
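The `docker system info --format "{{json .}}"` call above hands back the whole engine-info document as one JSON object, which the driver check then decodes (info.go:266). A small, hypothetical Go sketch of that pattern, decoding only a few of the fields visible in the dump (the struct and field selection are illustrative, not minikube's actual types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the handful of fields referenced in the log above; the real document
// carries many more.
type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	Driver        string `json:"Driver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s: %d CPUs, %d bytes RAM, %s storage driver, %s cgroup driver\n",
		info.ServerVersion, info.NCPU, info.MemTotal, info.Driver, info.CgroupDriver)
}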
	I1212 20:10:55.386717  301411 out.go:179] * Using the docker driver based on user configuration
	I1212 20:10:55.387694  301411 start.go:309] selected driver: docker
	I1212 20:10:55.387709  301411 start.go:927] validating driver "docker" against <nil>
	I1212 20:10:55.387720  301411 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:55.388249  301411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:55.457613  301411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-12 20:10:55.447291924 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:10:55.457773  301411 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:10:55.458013  301411 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:10:55.459667  301411 out.go:179] * Using Docker driver with root privileges
	I1212 20:10:55.460670  301411 cni.go:84] Creating CNI manager for ""
	I1212 20:10:55.460745  301411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:55.460758  301411 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:10:55.460849  301411 start.go:353] cluster config:
	{Name:auto-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1212 20:10:55.461975  301411 out.go:179] * Starting "auto-789448" primary control-plane node in "auto-789448" cluster
	I1212 20:10:55.462905  301411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:10:55.463937  301411 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:10:55.464838  301411 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:55.464869  301411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:10:55.464878  301411 cache.go:65] Caching tarball of preloaded images
	I1212 20:10:55.464939  301411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:10:55.464974  301411 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:10:55.464988  301411 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:10:55.465099  301411 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/auto-789448/config.json ...
	I1212 20:10:55.465128  301411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/auto-789448/config.json: {Name:mk97da60148df04a5e6a63c8230e2407b23b130c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:55.489583  301411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:10:55.489603  301411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:10:55.489622  301411 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:10:55.489652  301411 start.go:360] acquireMachinesLock for auto-789448: {Name:mk7b916b39ef935b3e88942e38376e97a92b6bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:10:55.489754  301411 start.go:364] duration metric: took 83.416µs to acquireMachinesLock for "auto-789448"
	I1212 20:10:55.489785  301411 start.go:93] Provisioning new machine with config: &{Name:auto-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-789448 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:55.489889  301411 start.go:125] createHost starting for "" (driver="docker")
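The cluster config dumped above is persisted as JSON under .minikube/profiles/<name>/config.json behind a write lock (profile.go:143, lock.go:35). A rough, hypothetical sketch of that save path; the struct is trimmed to a few fields and the locking is reduced to an atomic rename, so this is not minikube's profile.go:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// A few illustrative fields from the cluster config logged above.
type clusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
}

// saveProfile writes the config as JSON and swaps it into place with a rename,
// so readers never observe a half-written file.
func saveProfile(miniHome string, cfg clusterConfig) error {
	dir := filepath.Join(miniHome, "profiles", cfg.Name)
	if err := os.MkdirAll(dir, 0755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := filepath.Join(dir, "config.json.tmp")
	if err := os.WriteFile(tmp, data, 0644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	cfg := clusterConfig{Name: "auto-789448", Driver: "docker", Memory: 3072,
		CPUs: 2, KubernetesVersion: "v1.34.2", ContainerRuntime: "crio"}
	if err := saveProfile(os.ExpandEnv("$HOME/.minikube"), cfg); err != nil {
		panic(err)
	}
}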
	I1212 20:10:53.601386  289770 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:10:53.605865  289770 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:10:53.605883  289770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:10:53.618620  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:10:53.883175  289770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:10:53.883253  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:53.883373  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-433034 minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=default-k8s-diff-port-433034 minikube.k8s.io/primary=true
	I1212 20:10:53.899974  289770 ops.go:34] apiserver oom_adj: -16
	I1212 20:10:53.984427  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.484727  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.985229  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.485442  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.986683  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.753440  295304 cli_runner.go:164] Run: docker network inspect embed-certs-399565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:54.770794  295304 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1212 20:10:54.775184  295304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
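The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing entry, append the current gateway IP, and copy the result back over /etc/hosts. The same logic as a standalone Go sketch (the path, IP, and entry name are taken from the log; it needs enough privilege to write /etc/hosts and is not minikube's ssh_runner):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites the hosts file so that exactly one line maps hostname to
// ip, mirroring the grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}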
	I1212 20:10:54.786152  295304 kubeadm.go:884] updating cluster {Name:embed-certs-399565 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-399565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:10:54.786312  295304 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:54.786377  295304 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:54.845045  295304 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:54.845064  295304 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:10:54.845106  295304 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:54.869942  295304 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:54.869960  295304 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:10:54.869968  295304 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1212 20:10:54.870055  295304 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-399565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-399565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:10:54.870113  295304 ssh_runner.go:195] Run: crio config
	I1212 20:10:54.920949  295304 cni.go:84] Creating CNI manager for ""
	I1212 20:10:54.920975  295304 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:54.920993  295304 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:10:54.921020  295304 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-399565 NodeName:embed-certs-399565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:10:54.921182  295304 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-399565"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
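The generated kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick way to sanity-check such a file, sketched with the third-party gopkg.in/yaml.v3 package (an assumption; minikube renders the file from templates rather than parsing it back):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.Decoder walks every document in a multi-document stream.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document declares its own apiVersion and kind.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}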
	
	I1212 20:10:54.921249  295304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:10:54.929450  295304 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:10:54.929500  295304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:10:54.937380  295304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1212 20:10:54.952060  295304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:10:54.968830  295304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1212 20:10:54.981171  295304 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:10:54.985154  295304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:55.001048  295304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:55.098652  295304 ssh_runner.go:195] Run: sudo systemctl start kubelet
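The 10-kubeadm.conf copied above is a systemd drop-in: the empty `ExecStart=` line clears the command inherited from the base kubelet.service before the node-specific ExecStart is set, and the daemon-reload / start pair picks it up. A hedged Go sketch of that sequence run locally (the unit text is abbreviated and the flag set is illustrative; this is not minikube's ssh_runner):

package main

import (
	"os"
	"os/exec"
)

const dropIn = `[Service]
# An empty ExecStart= clears the command from the base unit before redefining it.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
`

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
	if err := run("systemctl", "daemon-reload"); err != nil {
		panic(err)
	}
	if err := run("systemctl", "start", "kubelet"); err != nil {
		panic(err)
	}
}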
	I1212 20:10:55.118588  295304 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565 for IP: 192.168.94.2
	I1212 20:10:55.118603  295304 certs.go:195] generating shared ca certs ...
	I1212 20:10:55.118620  295304 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:55.118766  295304 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:10:55.118827  295304 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:10:55.118840  295304 certs.go:257] generating profile certs ...
	I1212 20:10:55.118912  295304 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/client.key
	I1212 20:10:55.118934  295304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/client.crt with IP's: []
	I1212 20:10:55.400465  295304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/client.crt ...
	I1212 20:10:55.400495  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/client.crt: {Name:mkc6b358d9525e689e2115348c8fbcecab3d6952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:55.400683  295304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/client.key ...
	I1212 20:10:55.400703  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/client.key: {Name:mk557115a3efd91aa24c98d7cdcdc2f7c4d9fae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:55.400845  295304 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.key.392f528c
	I1212 20:10:55.400875  295304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.crt.392f528c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1212 20:10:55.470668  295304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.crt.392f528c ...
	I1212 20:10:55.470691  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.crt.392f528c: {Name:mk82ce2a545b08b395454e83832d5e0d574e47f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:55.470842  295304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.key.392f528c ...
	I1212 20:10:55.470859  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.key.392f528c: {Name:mkeeb93610dd4295e121040ccb03e8130cbe9325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:55.470973  295304 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.crt.392f528c -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.crt
	I1212 20:10:55.471075  295304 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.key.392f528c -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.key
	I1212 20:10:55.471163  295304 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.key
	I1212 20:10:55.471212  295304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.crt with IP's: []
	I1212 20:10:55.588656  295304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.crt ...
	I1212 20:10:55.588685  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.crt: {Name:mke1bfd8611b70377c16d53ed298dfd62a43f730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:55.588856  295304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.key ...
	I1212 20:10:55.588872  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.key: {Name:mk9c5c10c3291347d6a8e973c08457451effddf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
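Each "generating signed profile cert ... with IP's" step above issues a certificate signed by the shared minikubeCA, with the service, loopback, and node IPs as SANs. A compact standard-library sketch of that operation (error handling is minimal, the CA key is assumed to be a PKCS#1 RSA key, and file names are illustrative; this is not minikube's crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the shared CA kept under the .minikube home (ca.crt / ca.key).
	caPEM, err := os.ReadFile("ca.crt")
	check(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	check(err)

	// Fresh key pair for the profile certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		// SANs matching the apiserver cert generated in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)

	check(os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	check(os.WriteFile("apiserver.key",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600))
}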
	I1212 20:10:55.589097  295304 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:10:55.589146  295304 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:10:55.589160  295304 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:10:55.589194  295304 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:10:55.589230  295304 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:10:55.589258  295304 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:10:55.589325  295304 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:10:55.590110  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:10:55.609973  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:10:55.627735  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:10:55.646697  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:10:55.665446  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 20:10:55.684690  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:10:55.703520  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:10:55.720306  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/embed-certs-399565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:10:55.740097  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:10:55.764302  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:10:55.781389  295304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:10:55.800228  295304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:10:55.813255  295304 ssh_runner.go:195] Run: openssl version
	I1212 20:10:55.819571  295304 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:10:55.827959  295304 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:10:55.835805  295304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:10:55.840101  295304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:10:55.840173  295304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:10:55.876793  295304 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:10:55.886130  295304 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:10:55.900175  295304 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:55.909254  295304 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:10:55.917097  295304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:55.921306  295304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:55.921358  295304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:55.963600  295304 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:10:55.993696  295304 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:10:56.002737  295304 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:10:56.019376  295304 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:10:56.032053  295304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:10:56.039511  295304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:10:56.039594  295304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:10:56.095034  295304 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:10:56.104081  295304 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
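The openssl/ln pairs above register each cert with the system trust store: `openssl x509 -hash -noout` prints the subject-name hash, and a `<hash>.0` symlink under /etc/ssl/certs is what OpenSSL-based clients look up. The same step as a small, hypothetical Go helper (shelling out to the same openssl binary; it needs write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks <subject-hash>.0 in /etc/ssl/certs at the given PEM,
// mirroring the ln -fs calls in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}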
	I1212 20:10:56.113129  295304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:10:56.117693  295304 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:10:56.117749  295304 kubeadm.go:401] StartCluster: {Name:embed-certs-399565 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-399565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:56.117834  295304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:10:56.117911  295304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:10:56.153269  295304 cri.go:89] found id: ""
	I1212 20:10:56.153439  295304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:10:56.164030  295304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:56.174129  295304 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:56.174175  295304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:56.183523  295304 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:56.183541  295304 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:56.183588  295304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:10:56.193028  295304 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:56.193087  295304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:56.203009  295304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:10:56.213603  295304 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:56.213670  295304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:56.221516  295304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:10:56.230250  295304 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:56.230320  295304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:56.240154  295304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:10:56.249856  295304 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:56.249924  295304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
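The grep/rm pairs above are stale-config cleanup ahead of kubeadm init: any existing /etc/kubernetes/*.conf that does not mention https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it. A hedged Go sketch of the same check (file list and endpoint copied from the log; not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected endpoint
		}
		// Missing or pointing elsewhere: drop it and let kubeadm recreate it.
		if rmErr := os.Remove(f); rmErr == nil {
			fmt.Printf("removed stale %s\n", f)
		}
	}
}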
	I1212 20:10:56.258630  295304 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:56.316218  295304 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:10:56.316394  295304 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:10:56.339625  295304 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:10:56.339717  295304 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:10:56.339769  295304 kubeadm.go:319] OS: Linux
	I1212 20:10:56.339835  295304 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:10:56.339913  295304 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:10:56.339999  295304 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:10:56.340125  295304 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:10:56.340203  295304 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:10:56.340327  295304 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:10:56.340399  295304 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:10:56.340468  295304 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:10:56.413320  295304 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:10:56.413505  295304 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:10:56.413658  295304 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:10:56.421511  295304 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:10:56.425039  295304 out.go:252]   - Generating certificates and keys ...
	I1212 20:10:56.425154  295304 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:10:56.425241  295304 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:10:56.485004  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.984562  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:57.485402  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:57.984682  289770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:58.065944  289770 kubeadm.go:1114] duration metric: took 4.182754322s to wait for elevateKubeSystemPrivileges
	I1212 20:10:58.065975  289770 kubeadm.go:403] duration metric: took 15.249374442s to StartCluster
	I1212 20:10:58.065996  289770 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:58.066074  289770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:10:58.066807  289770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:58.067085  289770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:10:58.067078  289770 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:58.067101  289770 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:10:58.067179  289770 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-433034"
	I1212 20:10:58.067195  289770 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-433034"
	I1212 20:10:58.067222  289770 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-433034"
	I1212 20:10:58.067237  289770 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-433034"
	I1212 20:10:58.067218  289770 host.go:66] Checking if "default-k8s-diff-port-433034" exists ...
	I1212 20:10:58.067304  289770 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:58.067578  289770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Status}}
	I1212 20:10:58.067880  289770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Status}}
	I1212 20:10:58.069976  289770 out.go:179] * Verifying Kubernetes components...
	I1212 20:10:58.071690  289770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:58.100435  289770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:10:54.950004  294089 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.700544ms
	I1212 20:10:54.953712  294089 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:10:54.953839  294089 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 20:10:54.953978  294089 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:10:54.954088  294089 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:10:55.960535  294089 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006781332s
	I1212 20:10:57.145006  294089 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.191387116s
	I1212 20:10:58.101605  289770 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:58.101624  289770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:10:58.101681  289770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433034
	I1212 20:10:58.102378  289770 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-433034"
	I1212 20:10:58.102425  289770 host.go:66] Checking if "default-k8s-diff-port-433034" exists ...
	I1212 20:10:58.102892  289770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Status}}
	I1212 20:10:58.131548  289770 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:58.131572  289770 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:10:58.131627  289770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433034
	I1212 20:10:58.132905  289770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/default-k8s-diff-port-433034/id_rsa Username:docker}
	I1212 20:10:58.165918  289770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/default-k8s-diff-port-433034/id_rsa Username:docker}
	I1212 20:10:58.203808  289770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:10:58.290353  289770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:58.326667  289770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:58.327067  289770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:58.506943  289770 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1212 20:10:58.507907  289770 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-433034" to be "Ready" ...
	I1212 20:10:59.127373  289770 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-433034" context rescaled to 1 replicas
	I1212 20:11:00.118937  289770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.791835911s)
	I1212 20:11:00.242653  289770 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1212 20:10:55.492362  301411 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:10:55.492635  301411 start.go:159] libmachine.API.Create for "auto-789448" (driver="docker")
	I1212 20:10:55.492668  301411 client.go:173] LocalClient.Create starting
	I1212 20:10:55.492716  301411 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:10:55.492748  301411 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:55.492772  301411 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:55.492872  301411 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:10:55.492908  301411 main.go:143] libmachine: Decoding PEM data...
	I1212 20:10:55.492924  301411 main.go:143] libmachine: Parsing certificate...
	I1212 20:10:55.493371  301411 cli_runner.go:164] Run: docker network inspect auto-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:55.513833  301411 cli_runner.go:211] docker network inspect auto-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:55.513905  301411 network_create.go:284] running [docker network inspect auto-789448] to gather additional debugging logs...
	I1212 20:10:55.513927  301411 cli_runner.go:164] Run: docker network inspect auto-789448
	W1212 20:10:55.537340  301411 cli_runner.go:211] docker network inspect auto-789448 returned with exit code 1
	I1212 20:10:55.537388  301411 network_create.go:287] error running [docker network inspect auto-789448]: docker network inspect auto-789448: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-789448 not found
	I1212 20:10:55.537408  301411 network_create.go:289] output of [docker network inspect auto-789448]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-789448 not found
	
	** /stderr **
	I1212 20:10:55.537536  301411 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:55.559769  301411 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:10:55.560670  301411 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:10:55.561550  301411 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:10:55.562130  301411 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5b0e30eb6e7a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:34:8e:df:07:77} reservation:<nil>}
	I1212 20:10:55.562932  301411 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea1b00}
	I1212 20:10:55.562958  301411 network_create.go:124] attempt to create docker network auto-789448 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 20:10:55.563004  301411 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-789448 auto-789448
	I1212 20:10:55.613362  301411 network_create.go:108] docker network auto-789448 192.168.85.0/24 created
	I1212 20:10:55.613391  301411 kic.go:121] calculated static IP "192.168.85.2" for the "auto-789448" container
	I1212 20:10:55.613452  301411 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:55.632513  301411 cli_runner.go:164] Run: docker volume create auto-789448 --label name.minikube.sigs.k8s.io=auto-789448 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:55.650877  301411 oci.go:103] Successfully created a docker volume auto-789448
	I1212 20:10:55.650962  301411 cli_runner.go:164] Run: docker run --rm --name auto-789448-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-789448 --entrypoint /usr/bin/test -v auto-789448:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:56.050839  301411 oci.go:107] Successfully prepared a docker volume auto-789448
	I1212 20:10:56.050921  301411 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:56.050933  301411 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:56.051004  301411 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:11:00.262749  301411 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.211690373s)
	I1212 20:11:00.262781  301411 kic.go:203] duration metric: took 4.211844014s to extract preloaded images to volume ...
	W1212 20:11:00.262889  301411 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:11:00.262949  301411 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:11:00.263002  301411 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:11:00.957491  294089 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002538592s
	I1212 20:11:00.979146  294089 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:11:01.001618  294089 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:11:01.011244  294089 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:11:01.011454  294089 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-832562 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:11:01.021468  294089 kubeadm.go:319] [bootstrap-token] Using token: jarada.w7yro49wda6o4kh5
	I1212 20:11:00.244112  289770 addons.go:530] duration metric: took 2.177002055s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1212 20:11:00.511701  289770 node_ready.go:57] node "default-k8s-diff-port-433034" has "Ready":"False" status (will retry)
	I1212 20:10:57.294181  295304 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:10:57.414136  295304 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:10:57.657525  295304 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:10:57.745892  295304 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:10:57.931856  295304 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:10:57.932038  295304 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-399565 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 20:10:58.640246  295304 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:10:58.640533  295304 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-399565 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1212 20:10:59.021215  295304 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:10:59.164683  295304 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:10:59.678227  295304 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:10:59.678330  295304 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:11:00.575416  295304 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:11:00.696509  295304 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:11:01.101178  295304 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:11:01.312989  295304 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:11:01.744625  295304 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:11:01.745164  295304 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:11:01.748555  295304 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:11:01.028814  294089 out.go:252]   - Configuring RBAC rules ...
	I1212 20:11:01.028958  294089 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:11:01.030583  294089 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:11:01.036460  294089 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:11:01.039164  294089 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:11:01.041821  294089 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:11:01.044909  294089 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:11:01.362048  294089 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:11:01.779833  294089 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:11:02.362733  294089 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:11:02.363945  294089 kubeadm.go:319] 
	I1212 20:11:02.364040  294089 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:11:02.364051  294089 kubeadm.go:319] 
	I1212 20:11:02.364142  294089 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:11:02.364151  294089 kubeadm.go:319] 
	I1212 20:11:02.364213  294089 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:11:02.364326  294089 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:11:02.364409  294089 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:11:02.364418  294089 kubeadm.go:319] 
	I1212 20:11:02.364511  294089 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:11:02.364529  294089 kubeadm.go:319] 
	I1212 20:11:02.364597  294089 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:11:02.364606  294089 kubeadm.go:319] 
	I1212 20:11:02.364687  294089 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:11:02.364793  294089 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:11:02.364892  294089 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:11:02.364901  294089 kubeadm.go:319] 
	I1212 20:11:02.365020  294089 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:11:02.365161  294089 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:11:02.365172  294089 kubeadm.go:319] 
	I1212 20:11:02.365299  294089 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jarada.w7yro49wda6o4kh5 \
	I1212 20:11:02.365440  294089 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:11:02.365474  294089 kubeadm.go:319] 	--control-plane 
	I1212 20:11:02.365483  294089 kubeadm.go:319] 
	I1212 20:11:02.365594  294089 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:11:02.365608  294089 kubeadm.go:319] 
	I1212 20:11:02.365718  294089 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jarada.w7yro49wda6o4kh5 \
	I1212 20:11:02.365866  294089 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:11:02.368225  294089 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:11:02.368390  294089 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:11:02.368404  294089 cni.go:84] Creating CNI manager for ""
	I1212 20:11:02.368411  294089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:02.370650  294089 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:11:02.371701  294089 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:11:02.375839  294089 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1212 20:11:02.375854  294089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:11:02.388408  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:11:02.602346  294089 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:11:02.602411  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:02.602456  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-832562 minikube.k8s.io/updated_at=2025_12_12T20_11_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=newest-cni-832562 minikube.k8s.io/primary=true
	I1212 20:11:02.614641  294089 ops.go:34] apiserver oom_adj: -16
	I1212 20:11:02.689434  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:03.190507  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:03.690028  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:04.190470  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:04.690449  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:00.325644  301411 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-789448 --name auto-789448 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-789448 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-789448 --network auto-789448 --ip 192.168.85.2 --volume auto-789448:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:11:00.645727  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Running}}
	I1212 20:11:00.663826  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:00.683168  301411 cli_runner.go:164] Run: docker exec auto-789448 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:11:00.728792  301411 oci.go:144] the created container "auto-789448" has a running status.
	I1212 20:11:00.728818  301411 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa...
	I1212 20:11:00.818930  301411 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:11:00.848568  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:00.867734  301411 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:11:00.867758  301411 kic_runner.go:114] Args: [docker exec --privileged auto-789448 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:11:00.912370  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:00.937123  301411 machine.go:94] provisionDockerMachine start ...
	I1212 20:11:00.937233  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:00.966014  301411 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:00.966424  301411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1212 20:11:00.966446  301411 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:11:00.967706  301411 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33728->127.0.0.1:33094: read: connection reset by peer
	I1212 20:11:04.109528  301411 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-789448
	
	I1212 20:11:04.109555  301411 ubuntu.go:182] provisioning hostname "auto-789448"
	I1212 20:11:04.109633  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:04.129742  301411 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:04.130057  301411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1212 20:11:04.130082  301411 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-789448 && echo "auto-789448" | sudo tee /etc/hostname
	I1212 20:11:04.287292  301411 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-789448
	
	I1212 20:11:04.287365  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:04.311418  301411 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:04.311726  301411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1212 20:11:04.311749  301411 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-789448' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-789448/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-789448' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:11:04.450292  301411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:11:04.450323  301411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:11:04.450345  301411 ubuntu.go:190] setting up certificates
	I1212 20:11:04.450355  301411 provision.go:84] configureAuth start
	I1212 20:11:04.450431  301411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-789448
	I1212 20:11:04.467715  301411 provision.go:143] copyHostCerts
	I1212 20:11:04.467783  301411 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:11:04.467797  301411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:11:04.467883  301411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:11:04.468032  301411 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:11:04.468047  301411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:11:04.468091  301411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:11:04.468214  301411 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:11:04.468227  301411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:11:04.468268  301411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:11:04.468395  301411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.auto-789448 san=[127.0.0.1 192.168.85.2 auto-789448 localhost minikube]
	I1212 20:11:04.510896  301411 provision.go:177] copyRemoteCerts
	I1212 20:11:04.510964  301411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:11:04.511009  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:04.529910  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:04.627835  301411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:11:04.646597  301411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 20:11:04.664349  301411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:11:04.685943  301411 provision.go:87] duration metric: took 235.564478ms to configureAuth
	I1212 20:11:04.685972  301411 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:11:04.686139  301411 config.go:182] Loaded profile config "auto-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:04.686232  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:04.707289  301411 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:04.707640  301411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1212 20:11:04.707670  301411 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:11:05.020369  301411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:11:05.020397  301411 machine.go:97] duration metric: took 4.08325151s to provisionDockerMachine
	I1212 20:11:05.020413  301411 client.go:176] duration metric: took 9.527737888s to LocalClient.Create
	I1212 20:11:05.020433  301411 start.go:167] duration metric: took 9.527800922s to libmachine.API.Create "auto-789448"
	I1212 20:11:05.020446  301411 start.go:293] postStartSetup for "auto-789448" (driver="docker")
	I1212 20:11:05.020457  301411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:11:05.020553  301411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:11:05.020596  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:05.040417  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:05.136267  301411 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:11:05.139806  301411 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:11:05.139829  301411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:11:05.139839  301411 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:11:05.139895  301411 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:11:05.139985  301411 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:11:05.140076  301411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:11:05.147439  301411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:11:05.166823  301411 start.go:296] duration metric: took 146.365973ms for postStartSetup
	I1212 20:11:05.167173  301411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-789448
	I1212 20:11:05.184629  301411 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/auto-789448/config.json ...
	I1212 20:11:05.184892  301411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:11:05.184941  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:05.202980  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	W1212 20:11:02.512987  289770 node_ready.go:57] node "default-k8s-diff-port-433034" has "Ready":"False" status (will retry)
	W1212 20:11:05.011942  289770 node_ready.go:57] node "default-k8s-diff-port-433034" has "Ready":"False" status (will retry)
	I1212 20:11:05.297976  301411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:11:05.302691  301411 start.go:128] duration metric: took 9.812789428s to createHost
	I1212 20:11:05.302711  301411 start.go:83] releasing machines lock for "auto-789448", held for 9.812942861s
	I1212 20:11:05.302783  301411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-789448
	I1212 20:11:05.320343  301411 ssh_runner.go:195] Run: cat /version.json
	I1212 20:11:05.320392  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:05.320431  301411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:11:05.320550  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:05.337759  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:05.338803  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:05.429817  301411 ssh_runner.go:195] Run: systemctl --version
	I1212 20:11:05.503001  301411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:11:05.538336  301411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:11:05.543011  301411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:11:05.543068  301411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:11:05.567502  301411 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:11:05.567524  301411 start.go:496] detecting cgroup driver to use...
	I1212 20:11:05.567553  301411 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:11:05.567595  301411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:11:05.583825  301411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:11:05.595645  301411 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:11:05.595694  301411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:11:05.612565  301411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:11:05.630070  301411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:11:05.716986  301411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:11:05.818874  301411 docker.go:234] disabling docker service ...
	I1212 20:11:05.818948  301411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:11:05.839322  301411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:11:05.851614  301411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:11:05.956724  301411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:11:06.058082  301411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:11:06.073106  301411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:11:06.092634  301411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:11:06.092690  301411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:06.111011  301411 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:11:06.111082  301411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:06.120967  301411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:06.131025  301411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:06.140591  301411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:11:06.150024  301411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:06.159487  301411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:06.174457  301411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:06.184316  301411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:11:06.192638  301411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:11:06.201371  301411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:06.300374  301411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:11:06.463201  301411 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:11:06.463258  301411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:11:06.467259  301411 start.go:564] Will wait 60s for crictl version
	I1212 20:11:06.467339  301411 ssh_runner.go:195] Run: which crictl
	I1212 20:11:06.471363  301411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:11:06.498242  301411 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:11:06.498396  301411 ssh_runner.go:195] Run: crio --version
	I1212 20:11:06.527723  301411 ssh_runner.go:195] Run: crio --version
	I1212 20:11:06.556352  301411 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:11:01.749814  295304 out.go:252]   - Booting up control plane ...
	I1212 20:11:01.749906  295304 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:11:01.750022  295304 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:11:01.750449  295304 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:11:01.764603  295304 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:11:01.764819  295304 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:11:01.771527  295304 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:11:01.771892  295304 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:11:01.771948  295304 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:11:01.875763  295304 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:11:01.875916  295304 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:11:02.877437  295304 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001822758s
	I1212 20:11:02.881072  295304 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:11:02.881197  295304 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1212 20:11:02.881336  295304 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:11:02.881466  295304 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:11:04.386046  295304 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504840494s
	I1212 20:11:04.928410  295304 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.047255792s
	I1212 20:11:05.190470  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:05.690496  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:06.189981  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:06.690074  294089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:06.761729  294089 kubeadm.go:1114] duration metric: took 4.159367412s to wait for elevateKubeSystemPrivileges
	I1212 20:11:06.761765  294089 kubeadm.go:403] duration metric: took 14.786776817s to StartCluster
	I1212 20:11:06.761798  294089 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:06.761872  294089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:06.762930  294089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:06.763165  294089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:11:06.763181  294089 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:06.763165  294089 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:06.763263  294089 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-832562"
	I1212 20:11:06.763338  294089 addons.go:70] Setting default-storageclass=true in profile "newest-cni-832562"
	I1212 20:11:06.763367  294089 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-832562"
	I1212 20:11:06.763461  294089 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:06.763343  294089 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-832562"
	I1212 20:11:06.763565  294089 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:06.763728  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:06.764597  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:06.769155  294089 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:06.770411  294089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:06.786398  294089 addons.go:239] Setting addon default-storageclass=true in "newest-cni-832562"
	I1212 20:11:06.786443  294089 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:06.786884  294089 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:06.788653  294089 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:06.882927  295304 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001668045s
	I1212 20:11:06.906064  295304 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:11:06.922095  295304 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:11:06.934431  295304 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:11:06.934691  295304 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-399565 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:11:06.954313  295304 kubeadm.go:319] [bootstrap-token] Using token: 7qwr5q.jr9rgzsj8byes5mg
	I1212 20:11:06.789711  294089 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:06.789730  294089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:06.789778  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:06.818831  294089 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:06.818917  294089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:06.819017  294089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:06.825177  294089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:06.845379  294089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:06.866668  294089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:11:06.942813  294089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:06.958890  294089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:06.982590  294089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:07.121318  294089 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1212 20:11:07.121890  294089 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:07.121944  294089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:07.316721  294089 api_server.go:72] duration metric: took 553.441991ms to wait for apiserver process to appear ...
	I1212 20:11:07.316743  294089 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:07.316775  294089 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:07.322637  294089 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 20:11:07.322829  294089 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:11:07.323479  294089 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 20:11:07.323503  294089 api_server.go:131] duration metric: took 6.75312ms to wait for apiserver health ...
	I1212 20:11:07.323526  294089 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:07.324857  294089 addons.go:530] duration metric: took 561.671319ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:11:07.326169  294089 system_pods.go:59] 8 kube-system pods found
	I1212 20:11:07.326215  294089 system_pods.go:61] "coredns-7d764666f9-4762p" [a53ee562-410c-45be-b679-2660aa1e5684] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 20:11:07.326234  294089 system_pods.go:61] "etcd-newest-cni-832562" [49c28736-14cd-4e9c-a3a6-f0fd7b64c184] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:11:07.326244  294089 system_pods.go:61] "kindnet-zpw2b" [2340f364-5a1b-4ed7-89bc-3c9347238a44] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:11:07.326257  294089 system_pods.go:61] "kube-apiserver-newest-cni-832562" [4bafc9d8-689e-4b1d-aa30-d6a7ca78b990] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:11:07.326267  294089 system_pods.go:61] "kube-controller-manager-newest-cni-832562" [39096cb8-3644-4518-9f94-ee0bafe5f02a] Running
	I1212 20:11:07.326301  294089 system_pods.go:61] "kube-proxy-x67v5" [62e57f5e-f9e9-4a12-8e87-0f95e2e0879d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 20:11:07.326311  294089 system_pods.go:61] "kube-scheduler-newest-cni-832562" [86b42489-2f0a-46e5-9ebc-e551a2a0aa33] Running
	I1212 20:11:07.326317  294089 system_pods.go:61] "storage-provisioner" [d57bccb6-b89e-405d-ae22-62d444454f02] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 20:11:07.326337  294089 system_pods.go:74] duration metric: took 2.801345ms to wait for pod list to return data ...
	I1212 20:11:07.326347  294089 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:07.328565  294089 default_sa.go:45] found service account: "default"
	I1212 20:11:07.328590  294089 default_sa.go:55] duration metric: took 2.235998ms for default service account to be created ...
	I1212 20:11:07.328604  294089 kubeadm.go:587] duration metric: took 565.327071ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 20:11:07.328623  294089 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:11:07.331717  294089 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:11:07.331744  294089 node_conditions.go:123] node cpu capacity is 8
	I1212 20:11:07.331759  294089 node_conditions.go:105] duration metric: took 3.13002ms to run NodePressure ...
	I1212 20:11:07.331773  294089 start.go:242] waiting for startup goroutines ...
	I1212 20:11:07.627376  294089 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-832562" context rescaled to 1 replicas
	I1212 20:11:07.627424  294089 start.go:247] waiting for cluster config update ...
	I1212 20:11:07.627440  294089 start.go:256] writing updated cluster config ...
	I1212 20:11:07.627742  294089 ssh_runner.go:195] Run: rm -f paused
	I1212 20:11:07.688319  294089 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 20:11:07.689725  294089 out.go:179] * Done! kubectl is now configured to use "newest-cni-832562" cluster and "default" namespace by default
	I1212 20:11:06.955671  295304 out.go:252]   - Configuring RBAC rules ...
	I1212 20:11:06.955826  295304 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:11:06.966346  295304 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:11:06.976512  295304 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:11:06.982776  295304 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:11:06.986331  295304 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:11:06.991550  295304 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:11:07.289969  295304 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:11:07.709500  295304 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:11:08.289724  295304 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:11:08.290822  295304 kubeadm.go:319] 
	I1212 20:11:08.290952  295304 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:11:08.290964  295304 kubeadm.go:319] 
	I1212 20:11:08.291064  295304 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:11:08.291100  295304 kubeadm.go:319] 
	I1212 20:11:08.291165  295304 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:11:08.291297  295304 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:11:08.291369  295304 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:11:08.291378  295304 kubeadm.go:319] 
	I1212 20:11:08.291457  295304 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:11:08.291468  295304 kubeadm.go:319] 
	I1212 20:11:08.291553  295304 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:11:08.291576  295304 kubeadm.go:319] 
	I1212 20:11:08.291649  295304 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:11:08.291766  295304 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:11:08.291878  295304 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:11:08.291898  295304 kubeadm.go:319] 
	I1212 20:11:08.291996  295304 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:11:08.292086  295304 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:11:08.292094  295304 kubeadm.go:319] 
	I1212 20:11:08.292190  295304 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7qwr5q.jr9rgzsj8byes5mg \
	I1212 20:11:08.292316  295304 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:11:08.292343  295304 kubeadm.go:319] 	--control-plane 
	I1212 20:11:08.292351  295304 kubeadm.go:319] 
	I1212 20:11:08.292443  295304 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:11:08.292452  295304 kubeadm.go:319] 
	I1212 20:11:08.292550  295304 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7qwr5q.jr9rgzsj8byes5mg \
	I1212 20:11:08.292666  295304 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:11:08.295259  295304 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:11:08.295422  295304 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:11:08.295449  295304 cni.go:84] Creating CNI manager for ""
	I1212 20:11:08.295464  295304 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:08.297205  295304 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.38616437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.386341768Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f9a461f6-eb12-4c39-9b6f-01900e3c008b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.388450947Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.38935161Z" level=info msg="Ran pod sandbox f39acc7e8e8c4f1414e5f0eed31f215e4b714c6bf27e0cd0533c41f26b937adc with infra container: kube-system/kube-proxy-x67v5/POD" id=f9a461f6-eb12-4c39-9b6f-01900e3c008b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.389355652Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9255e06e-7050-4686-99dc-1a1e3a616006 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.390608115Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a99f084d-6689-47b9-b356-2d806d104821 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.390921635Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.391707403Z" level=info msg="Ran pod sandbox 9cbe944e7f226fc8baa30ad612f6fad99443243d11e328935681a5e5a5870a5e with infra container: kube-system/kindnet-zpw2b/POD" id=9255e06e-7050-4686-99dc-1a1e3a616006 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.391846302Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cbf14c48-099f-432a-8eeb-3bb3d6bd8811 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.393172857Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=bfc58e1d-ce17-4d48-baef-33bdcea3ee2b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.394320715Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=47c94daf-d4f8-491f-8aec-084750be0923 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.395509864Z" level=info msg="Creating container: kube-system/kube-proxy-x67v5/kube-proxy" id=91a7df10-e552-4ee2-9d8d-85e1e320096e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.395641506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.397172824Z" level=info msg="Creating container: kube-system/kindnet-zpw2b/kindnet-cni" id=59ef724b-7ba3-4e52-b1ca-95a8bc65ae75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.397258254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.400980755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.401405211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.40160888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.402082155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.436857199Z" level=info msg="Created container 4b1b13c818553c9b55ee48a1d13f5e5a618ae4ccb73c6a25768777e1a9cd8b0c: kube-system/kindnet-zpw2b/kindnet-cni" id=59ef724b-7ba3-4e52-b1ca-95a8bc65ae75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.437477047Z" level=info msg="Starting container: 4b1b13c818553c9b55ee48a1d13f5e5a618ae4ccb73c6a25768777e1a9cd8b0c" id=860deed6-b85d-4e1e-932c-646eaac88008 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.43915992Z" level=info msg="Started container" PID=1590 containerID=4b1b13c818553c9b55ee48a1d13f5e5a618ae4ccb73c6a25768777e1a9cd8b0c description=kube-system/kindnet-zpw2b/kindnet-cni id=860deed6-b85d-4e1e-932c-646eaac88008 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9cbe944e7f226fc8baa30ad612f6fad99443243d11e328935681a5e5a5870a5e
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.440962846Z" level=info msg="Created container 9203b4a94ccba691ca4be45453821763cb432a9877a943c69db146f4d1ce4b2c: kube-system/kube-proxy-x67v5/kube-proxy" id=91a7df10-e552-4ee2-9d8d-85e1e320096e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.441581283Z" level=info msg="Starting container: 9203b4a94ccba691ca4be45453821763cb432a9877a943c69db146f4d1ce4b2c" id=a4a61921-a4c8-4e0f-9451-916cc0400373 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:07 newest-cni-832562 crio[768]: time="2025-12-12T20:11:07.444911118Z" level=info msg="Started container" PID=1591 containerID=9203b4a94ccba691ca4be45453821763cb432a9877a943c69db146f4d1ce4b2c description=kube-system/kube-proxy-x67v5/kube-proxy id=a4a61921-a4c8-4e0f-9451-916cc0400373 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f39acc7e8e8c4f1414e5f0eed31f215e4b714c6bf27e0cd0533c41f26b937adc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4b1b13c818553       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   9cbe944e7f226       kindnet-zpw2b                               kube-system
	9203b4a94ccba       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   f39acc7e8e8c4       kube-proxy-x67v5                            kube-system
	0bb6bedbb2067       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   13 seconds ago      Running             kube-apiserver            0                   04cf4708e1705       kube-apiserver-newest-cni-832562            kube-system
	656187f255af1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   13 seconds ago      Running             etcd                      0                   6d9ec29db61b1       etcd-newest-cni-832562                      kube-system
	d43d9cfbd2043       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   13 seconds ago      Running             kube-scheduler            0                   c46a5f6a1f388       kube-scheduler-newest-cni-832562            kube-system
	3ca608152e696       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   13 seconds ago      Running             kube-controller-manager   0                   c9f393db40c8f       kube-controller-manager-newest-cni-832562   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-832562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-832562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=newest-cni-832562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_11_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-832562
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:11:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:11:01 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:11:01 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:11:01 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 12 Dec 2025 20:11:01 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-832562
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                02e0f34f-a5d1-439b-8544-2451e32971bb
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-832562                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10s
	  kube-system                 kindnet-zpw2b                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-832562             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-832562    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-x67v5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-832562             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-832562 event: Registered Node newest-cni-832562 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [656187f255af10b1d8bff7d69f2ca8815e90697a394a181b146d6310fe07acf9] <==
	{"level":"warn","ts":"2025-12-12T20:10:56.462303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.468869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.476860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.483523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.491947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.500446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.509069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.517246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.536031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.545338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.553728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.562452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:56.614841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T20:10:58.476021Z","caller":"traceutil/trace.go:172","msg":"trace[972618393] transaction","detail":"{read_only:false; response_revision:105; number_of_response:1; }","duration":"101.503015ms","start":"2025-12-12T20:10:58.374469Z","end":"2025-12-12T20:10:58.475972Z","steps":["trace[972618393] 'process raft request'  (duration: 58.114829ms)","trace[972618393] 'compare'  (duration: 43.197728ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:58.627510Z","caller":"traceutil/trace.go:172","msg":"trace[425601064] transaction","detail":"{read_only:false; response_revision:107; number_of_response:1; }","duration":"103.536175ms","start":"2025-12-12T20:10:58.523949Z","end":"2025-12-12T20:10:58.627485Z","steps":["trace[425601064] 'process raft request'  (duration: 98.909646ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:10:58.925233Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.374869ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357267099266381 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:controller:expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:expand-controller\" value_size:720 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-12T20:10:58.925459Z","caller":"traceutil/trace.go:172","msg":"trace[786974274] transaction","detail":"{read_only:false; response_revision:111; number_of_response:1; }","duration":"173.000245ms","start":"2025-12-12T20:10:58.752445Z","end":"2025-12-12T20:10:58.925445Z","steps":["trace[786974274] 'process raft request'  (duration: 172.923488ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:58.925559Z","caller":"traceutil/trace.go:172","msg":"trace[954044592] transaction","detail":"{read_only:false; response_revision:110; number_of_response:1; }","duration":"235.954205ms","start":"2025-12-12T20:10:58.689566Z","end":"2025-12-12T20:10:58.925520Z","steps":["trace[954044592] 'process raft request'  (duration: 37.833118ms)","trace[954044592] 'compare'  (duration: 197.260837ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:59.125298Z","caller":"traceutil/trace.go:172","msg":"trace[1372486602] transaction","detail":"{read_only:false; response_revision:112; number_of_response:1; }","duration":"194.001156ms","start":"2025-12-12T20:10:58.931251Z","end":"2025-12-12T20:10:59.125252Z","steps":["trace[1372486602] 'process raft request'  (duration: 109.180342ms)","trace[1372486602] 'compare'  (duration: 84.691726ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:59.244293Z","caller":"traceutil/trace.go:172","msg":"trace[132952759] transaction","detail":"{read_only:false; response_revision:113; number_of_response:1; }","duration":"113.068123ms","start":"2025-12-12T20:10:59.131181Z","end":"2025-12-12T20:10:59.244249Z","steps":["trace[132952759] 'process raft request'  (duration: 110.841613ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.473036Z","caller":"traceutil/trace.go:172","msg":"trace[362724321] transaction","detail":"{read_only:false; response_revision:117; number_of_response:1; }","duration":"151.910014ms","start":"2025-12-12T20:10:59.321106Z","end":"2025-12-12T20:10:59.473016Z","steps":["trace[362724321] 'process raft request'  (duration: 130.890047ms)","trace[362724321] 'compare'  (duration: 20.905742ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:59.613900Z","caller":"traceutil/trace.go:172","msg":"trace[691865081] transaction","detail":"{read_only:false; response_revision:118; number_of_response:1; }","duration":"135.122283ms","start":"2025-12-12T20:10:59.478758Z","end":"2025-12-12T20:10:59.613881Z","steps":["trace[691865081] 'process raft request'  (duration: 124.017557ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:10:59.904645Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.194216ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357267099266402 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:controller:pod-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:pod-garbage-collector\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-12T20:10:59.904732Z","caller":"traceutil/trace.go:172","msg":"trace[755505900] transaction","detail":"{read_only:false; response_revision:120; number_of_response:1; }","duration":"265.505363ms","start":"2025-12-12T20:10:59.639214Z","end":"2025-12-12T20:10:59.904719Z","steps":["trace[755505900] 'process raft request'  (duration: 128.184603ms)","trace[755505900] 'compare'  (duration: 137.079797ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:11:00.243486Z","caller":"traceutil/trace.go:172","msg":"trace[815452242] transaction","detail":"{read_only:false; response_revision:130; number_of_response:1; }","duration":"128.970396ms","start":"2025-12-12T20:11:00.114495Z","end":"2025-12-12T20:11:00.243466Z","steps":["trace[815452242] 'process raft request'  (duration: 121.917809ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:11:09 up 53 min,  0 user,  load average: 3.60, 2.20, 1.57
	Linux newest-cni-832562 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4b1b13c818553c9b55ee48a1d13f5e5a618ae4ccb73c6a25768777e1a9cd8b0c] <==
	I1212 20:11:07.721209       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:11:07.721874       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 20:11:07.722019       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:11:07.722068       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:11:07.722100       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:11:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:11:07.923222       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:11:08.019136       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:11:08.019175       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:11:08.019394       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:11:08.419358       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:11:08.419416       1 metrics.go:72] Registering metrics
	I1212 20:11:08.419501       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [0bb6bedbb20679f54588692802425b809c00c8dbd15a7a58d5f5b79292d61d87] <==
	I1212 20:10:57.183587       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:10:57.183722       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1212 20:10:57.187526       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 20:10:57.187681       1 aggregator.go:187] initial CRD sync complete...
	I1212 20:10:57.187699       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:10:57.187707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:10:57.187714       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:10:57.188854       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:10:58.080779       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1212 20:10:58.088225       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:10:58.088481       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 20:11:00.485365       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:11:00.522895       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:11:00.587306       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:11:00.594437       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1212 20:11:00.595697       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:11:00.600113       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:11:01.101156       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:11:01.767820       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:11:01.778976       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:11:01.787930       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 20:11:07.018850       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:11:07.057208       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1212 20:11:07.108142       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:11:07.114407       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3ca608152e69636a61ee4c294ec435b4bd9a013c2abee1923320bba667ee9598] <==
	I1212 20:11:05.907480       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.907461       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.907453       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.908814       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.908861       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.907486       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.907407       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.907786       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.907798       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.908887       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.908342       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.908826       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.907430       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.908835       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.919359       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1212 20:11:05.919930       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-832562"
	I1212 20:11:05.920012       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1212 20:11:05.920089       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-832562" podCIDRs=["10.42.0.0/24"]
	I1212 20:11:05.919750       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.919742       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:05.928104       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:06.010189       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:06.010210       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 20:11:06.010217       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 20:11:06.028538       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9203b4a94ccba691ca4be45453821763cb432a9877a943c69db146f4d1ce4b2c] <==
	I1212 20:11:07.483973       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:11:07.556789       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:07.657220       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:07.657261       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 20:11:07.657387       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:11:07.682875       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:11:07.682973       1 server_linux.go:136] "Using iptables Proxier"
	I1212 20:11:07.689640       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:11:07.690063       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 20:11:07.690090       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:07.691648       1 config.go:200] "Starting service config controller"
	I1212 20:11:07.691676       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:11:07.691682       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:11:07.691719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:11:07.691855       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:11:07.691875       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:11:07.691899       1 config.go:309] "Starting node config controller"
	I1212 20:11:07.691908       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:11:07.691915       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:11:07.791913       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:11:07.792014       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:11:07.793563       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d43d9cfbd204343e240310174debc755881bc2525301132f70c048d32c10c987] <==
	E1212 20:10:58.357406       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 20:10:58.358748       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1212 20:10:58.375493       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 20:10:58.376956       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1212 20:10:58.454434       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1212 20:10:58.455429       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1212 20:10:58.467700       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1212 20:10:58.468655       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1212 20:10:58.512043       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 20:10:58.513160       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1212 20:10:58.543012       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1212 20:10:58.543941       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1212 20:10:58.577385       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1212 20:10:58.578386       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1212 20:10:58.607478       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1212 20:10:58.608448       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1212 20:10:58.736316       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 20:10:58.737316       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1212 20:10:59.673781       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1212 20:10:59.674846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1212 20:11:00.185879       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1212 20:11:00.186857       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1212 20:11:00.283377       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 20:11:00.286000       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1212 20:11:00.638774       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 20:11:02 newest-cni-832562 kubelet[1308]: E1212 20:11:02.614241    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:02 newest-cni-832562 kubelet[1308]: I1212 20:11:02.663468    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-832562" podStartSLOduration=4.663448796 podStartE2EDuration="4.663448796s" podCreationTimestamp="2025-12-12 20:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:02.655847158 +0000 UTC m=+1.152432688" watchObservedRunningTime="2025-12-12 20:11:02.663448796 +0000 UTC m=+1.160034326"
	Dec 12 20:11:02 newest-cni-832562 kubelet[1308]: I1212 20:11:02.671424    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-832562" podStartSLOduration=3.6714084639999998 podStartE2EDuration="3.671408464s" podCreationTimestamp="2025-12-12 20:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:02.663406865 +0000 UTC m=+1.159992394" watchObservedRunningTime="2025-12-12 20:11:02.671408464 +0000 UTC m=+1.167993994"
	Dec 12 20:11:02 newest-cni-832562 kubelet[1308]: I1212 20:11:02.683929    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-832562" podStartSLOduration=1.683911332 podStartE2EDuration="1.683911332s" podCreationTimestamp="2025-12-12 20:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:02.671844572 +0000 UTC m=+1.168430102" watchObservedRunningTime="2025-12-12 20:11:02.683911332 +0000 UTC m=+1.180496862"
	Dec 12 20:11:02 newest-cni-832562 kubelet[1308]: I1212 20:11:02.693614    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-832562" podStartSLOduration=1.6935979799999998 podStartE2EDuration="1.69359798s" podCreationTimestamp="2025-12-12 20:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:02.684082519 +0000 UTC m=+1.180668047" watchObservedRunningTime="2025-12-12 20:11:02.69359798 +0000 UTC m=+1.190183509"
	Dec 12 20:11:03 newest-cni-832562 kubelet[1308]: E1212 20:11:03.607031    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:03 newest-cni-832562 kubelet[1308]: E1212 20:11:03.607131    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-832562" containerName="etcd"
	Dec 12 20:11:03 newest-cni-832562 kubelet[1308]: E1212 20:11:03.607376    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:03 newest-cni-832562 kubelet[1308]: E1212 20:11:03.607595    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-832562" containerName="kube-controller-manager"
	Dec 12 20:11:04 newest-cni-832562 kubelet[1308]: E1212 20:11:04.608578    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-832562" containerName="etcd"
	Dec 12 20:11:04 newest-cni-832562 kubelet[1308]: E1212 20:11:04.608702    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:06 newest-cni-832562 kubelet[1308]: I1212 20:11:06.008446    1308 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 12 20:11:06 newest-cni-832562 kubelet[1308]: I1212 20:11:06.009252    1308 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 12 20:11:06 newest-cni-832562 kubelet[1308]: E1212 20:11:06.391312    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:06 newest-cni-832562 kubelet[1308]: E1212 20:11:06.612102    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.114893    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-kube-proxy\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.115350    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-xtables-lock\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.115379    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-lib-modules\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.115399    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-cni-cfg\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.115423    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-lib-modules\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.115539    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6t7\" (UniqueName: \"kubernetes.io/projected/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-kube-api-access-9r6t7\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.115599    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-xtables-lock\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.115627    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk842\" (UniqueName: \"kubernetes.io/projected/2340f364-5a1b-4ed7-89bc-3c9347238a44-kube-api-access-dk842\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.628051    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-zpw2b" podStartSLOduration=0.628031745 podStartE2EDuration="628.031745ms" podCreationTimestamp="2025-12-12 20:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:07.627803842 +0000 UTC m=+6.124389380" watchObservedRunningTime="2025-12-12 20:11:07.628031745 +0000 UTC m=+6.124617276"
	Dec 12 20:11:07 newest-cni-832562 kubelet[1308]: I1212 20:11:07.637596    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-x67v5" podStartSLOduration=0.637581566 podStartE2EDuration="637.581566ms" podCreationTimestamp="2025-12-12 20:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:07.637391455 +0000 UTC m=+6.133976985" watchObservedRunningTime="2025-12-12 20:11:07.637581566 +0000 UTC m=+6.134167096"
	

                                                
                                                
-- /stdout --
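In the log above, coredns-7d764666f9-4762p and storage-provisioner are reported Pending:PodScheduled:Unschedulable because the node still carries the node.kubernetes.io/not-ready:NoSchedule taint shown under "describe nodes": kindnet had only just started, so no CNI config existed in /etc/cni/net.d/ yet and the kubelet kept the node NotReady. A quick way to confirm the taint against the same context (a hypothetical follow-up command, not part of the test) would be:

  kubectl --context newest-cni-832562 get node newest-cni-832562 -o jsonpath='{.spec.taints}'

Once kindnet writes its CNI config the node goes Ready, the not-ready taint is removed, and the pending pods can schedule.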
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832562 -n newest-cni-832562
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-832562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4762p storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner: exit status 1 (61.419852ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4762p" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.07s)
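To reproduce only this subtest outside CI, Go's standard subtest selector can target the exact failing path. A minimal sketch, assuming a checked-out minikube tree with out/minikube-linux-amd64 already built; the suite may expect additional driver or runtime flags that this report does not show:

# hypothetical local reproduction, standard Go test runner flags only
go test ./test/integration -run 'TestStartStop/group/newest-cni/serial/EnableAddonWhileActive' -v -timeout 30m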

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-433034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-433034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (273.931466ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-433034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-433034 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-433034 describe deploy/metrics-server -n kube-system: exit status 1 (74.39624ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-433034 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
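The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks whether the cluster is paused by listing runc containers, and that listing fails because /run/runc does not exist inside the node. A minimal sketch for probing the same state by hand, assuming the profile from this test; whether CRI-O keeps its runc state under /run/runc on this image is an assumption, not something the report states:

# hypothetical manual probe, not part of the captured output
minikube ssh -p default-k8s-diff-port-433034 -- "sudo ls -ld /run/runc"      # directory the error reports as missing
minikube ssh -p default-k8s-diff-port-433034 -- "sudo runc list -f json"     # the exact command the paused check runs
minikube ssh -p default-k8s-diff-port-433034 -- "sudo crictl ps"             # CRI-O's own view of running containers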
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-433034
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-433034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7",
	        "Created": "2025-12-12T20:10:35.289904623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291505,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:10:35.332805691Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/hosts",
	        "LogPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7-json.log",
	        "Name": "/default-k8s-diff-port-433034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-433034:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-433034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7",
	                "LowerDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-433034",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-433034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-433034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-433034",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-433034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "baaa45497a999458ec580d4ace522f1d4b37e90eafa66b99e19b90f1a24026a8",
	            "SandboxKey": "/var/run/docker/netns/baaa45497a99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-433034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9682428112d69f44e5ab9b8a0895f7f7dfc5a7aa9a7423b8acd6944687003e6d",
	                    "EndpointID": "8f0f971dcac184d7bb4ff2fc7fe2da9d3e4fa3659e2894b6b6229141adebbf1d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "16:16:a5:3c:55:7e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-433034",
	                        "fd3264bb0f47"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
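The inspect output shows each container port published on loopback with an ephemeral host port (for example 8444/tcp, the non-default API server port, maps to 127.0.0.1:33082). A minimal sketch for extracting a single mapping, mirroring the Go-template form the helpers themselves use for 22/tcp later in this log; the container name is taken from this report:

# hypothetical one-liners, not part of the captured output
docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-433034
docker port default-k8s-diff-port-433034 8444/tcp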
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433034 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-433034 logs -n 25: (1.044499492s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ addons  │ enable dashboard -p no-preload-753103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                            │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                         │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p disable-driver-mounts-044739                                                                                                                                                                                                                      │ disable-driver-mounts-044739 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ no-preload-753103 image list --format=json                                                                                                                                                                                                           │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p no-preload-753103 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p auto-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ stop    │ -p newest-cni-832562 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-832562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:11:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:11:12.532737  306436 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:11:12.532986  306436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:12.532995  306436 out.go:374] Setting ErrFile to fd 2...
	I1212 20:11:12.532999  306436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:12.533167  306436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:11:12.533560  306436 out.go:368] Setting JSON to false
	I1212 20:11:12.534675  306436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3219,"bootTime":1765567053,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:11:12.534737  306436 start.go:143] virtualization: kvm guest
	I1212 20:11:12.536616  306436 out.go:179] * [newest-cni-832562] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:11:12.537669  306436 notify.go:221] Checking for updates...
	I1212 20:11:12.537685  306436 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:11:12.538838  306436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:11:12.540254  306436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:12.541433  306436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:11:12.542571  306436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:11:12.543568  306436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:11:12.544902  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:12.545459  306436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:11:12.570695  306436 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:11:12.570780  306436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:12.629539  306436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:11:12.618700808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:12.629686  306436 docker.go:319] overlay module found
	I1212 20:11:12.631198  306436 out.go:179] * Using the docker driver based on existing profile
	I1212 20:11:11.426717  289770 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433034" is "Ready"
	I1212 20:11:11.426743  289770 pod_ready.go:86] duration metric: took 384.436254ms for pod "kube-controller-manager-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:11.625409  289770 pod_ready.go:83] waiting for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.026631  289770 pod_ready.go:94] pod "kube-proxy-tmrrg" is "Ready"
	I1212 20:11:12.026656  289770 pod_ready.go:86] duration metric: took 401.222833ms for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.227624  289770 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.626733  289770 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433034" is "Ready"
	I1212 20:11:12.626762  289770 pod_ready.go:86] duration metric: took 399.116059ms for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.626778  289770 pod_ready.go:40] duration metric: took 1.604405948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:11:12.686012  289770 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:11:12.687473  289770 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433034" cluster and "default" namespace by default
	I1212 20:11:12.632248  306436 start.go:309] selected driver: docker
	I1212 20:11:12.632261  306436 start.go:927] validating driver "docker" against &{Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:12.632407  306436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:11:12.633116  306436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:12.702610  306436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:11:12.690486315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:12.702958  306436 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 20:11:12.702986  306436 cni.go:84] Creating CNI manager for ""
	I1212 20:11:12.703053  306436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:12.703091  306436 start.go:353] cluster config:
	{Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:12.704612  306436 out.go:179] * Starting "newest-cni-832562" primary control-plane node in "newest-cni-832562" cluster
	I1212 20:11:12.705863  306436 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:11:12.709546  306436 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:11:12.710732  306436 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:11:12.710861  306436 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 20:11:12.710872  306436 cache.go:65] Caching tarball of preloaded images
	I1212 20:11:12.710930  306436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:11:12.711254  306436 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:11:12.711713  306436 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:11:12.711874  306436 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/config.json ...
	I1212 20:11:12.740476  306436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:11:12.740499  306436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:11:12.740515  306436 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:11:12.740550  306436 start.go:360] acquireMachinesLock for newest-cni-832562: {Name:mk09681eb0bd95476952ca6616e7bf9ebfe66f0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:11:12.740607  306436 start.go:364] duration metric: took 36.955µs to acquireMachinesLock for "newest-cni-832562"
	I1212 20:11:12.740626  306436 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:11:12.740633  306436 fix.go:54] fixHost starting: 
	I1212 20:11:12.740922  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:12.762201  306436 fix.go:112] recreateIfNeeded on newest-cni-832562: state=Stopped err=<nil>
	W1212 20:11:12.762227  306436 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:11:12.160118  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:12.659749  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:13.162390  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:13.247549  295304 kubeadm.go:1114] duration metric: took 4.684925554s to wait for elevateKubeSystemPrivileges
	I1212 20:11:13.247586  295304 kubeadm.go:403] duration metric: took 17.129842196s to StartCluster
	I1212 20:11:13.247609  295304 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:13.247674  295304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:13.249680  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:13.250021  295304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:11:13.250039  295304 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:13.250112  295304 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:13.250202  295304 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-399565"
	I1212 20:11:13.250219  295304 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-399565"
	I1212 20:11:13.250230  295304 addons.go:70] Setting default-storageclass=true in profile "embed-certs-399565"
	I1212 20:11:13.250240  295304 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:13.250249  295304 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:11:13.250293  295304 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-399565"
	I1212 20:11:13.250642  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.251092  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.251535  295304 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:13.252738  295304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:13.278937  295304 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:13.280251  295304 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:13.280335  295304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:13.280437  295304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-399565
	I1212 20:11:13.282471  295304 addons.go:239] Setting addon default-storageclass=true in "embed-certs-399565"
	I1212 20:11:13.282550  295304 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:11:13.283069  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.310936  295304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/embed-certs-399565/id_rsa Username:docker}
	I1212 20:11:13.314744  295304 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:13.314765  295304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:13.314906  295304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-399565
	I1212 20:11:13.337251  295304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/embed-certs-399565/id_rsa Username:docker}
	I1212 20:11:13.353114  295304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:11:13.417391  295304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:13.420847  295304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:13.448465  295304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:13.533431  295304 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 20:11:13.534253  295304 node_ready.go:35] waiting up to 6m0s for node "embed-certs-399565" to be "Ready" ...
	I1212 20:11:13.742444  295304 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:11:13.743388  295304 addons.go:530] duration metric: took 493.280619ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:11:14.039742  295304 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-399565" context rescaled to 1 replicas
	W1212 20:11:15.537067  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	I1212 20:11:12.763744  306436 out.go:252] * Restarting existing docker container for "newest-cni-832562" ...
	I1212 20:11:12.763824  306436 cli_runner.go:164] Run: docker start newest-cni-832562
	I1212 20:11:13.023832  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:13.046680  306436 kic.go:430] container "newest-cni-832562" state is running.
	I1212 20:11:13.047113  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:13.068770  306436 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/config.json ...
	I1212 20:11:13.069032  306436 machine.go:94] provisionDockerMachine start ...
	I1212 20:11:13.069098  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:13.089330  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:13.089573  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:13.089588  306436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:11:13.090218  306436 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49200->127.0.0.1:33099: read: connection reset by peer
	I1212 20:11:16.222759  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:11:16.222785  306436 ubuntu.go:182] provisioning hostname "newest-cni-832562"
	I1212 20:11:16.222834  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.241438  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.241751  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.241768  306436 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-832562 && echo "newest-cni-832562" | sudo tee /etc/hostname
	I1212 20:11:16.380807  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:11:16.380888  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.398960  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.399163  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.399179  306436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-832562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-832562/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-832562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:11:16.530634  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:11:16.530659  306436 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:11:16.530681  306436 ubuntu.go:190] setting up certificates
	I1212 20:11:16.530691  306436 provision.go:84] configureAuth start
	I1212 20:11:16.530749  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:16.548912  306436 provision.go:143] copyHostCerts
	I1212 20:11:16.548982  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:11:16.548998  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:11:16.549073  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:11:16.549266  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:11:16.549294  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:11:16.549341  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:11:16.549441  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:11:16.549451  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:11:16.549488  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:11:16.549559  306436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.newest-cni-832562 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-832562]
	I1212 20:11:16.636954  306436 provision.go:177] copyRemoteCerts
	I1212 20:11:16.637013  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:11:16.637053  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.655185  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:16.749983  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:11:16.766255  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:11:16.782372  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:11:16.798676  306436 provision.go:87] duration metric: took 267.965188ms to configureAuth
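Note: configureAuth regenerates the machine's server certificate with SANs covering every name and address the node answers on (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-832562) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A sketch for confirming the SANs on the host-side copy, using the ServerCertPath from the auth options above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expected, per the san=[...] entry above: DNS:localhost, DNS:minikube,
    # DNS:newest-cni-832562, IP Address:127.0.0.1, IP Address:192.168.76.2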
	I1212 20:11:16.798702  306436 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:11:16.798853  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:16.798944  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.816825  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.817017  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.817034  306436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:11:17.128705  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:11:17.128730  306436 machine.go:97] duration metric: took 4.059681977s to provisionDockerMachine
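Note: provisioning ends by writing /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS (here an --insecure-registry entry for the 10.96.0.0/12 service CIDR) and restarting CRI-O. A quick sanity check from the host, assuming `minikube ssh` access:

    minikube -p newest-cni-832562 ssh -- "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    #           active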
	I1212 20:11:17.128745  306436 start.go:293] postStartSetup for "newest-cni-832562" (driver="docker")
	I1212 20:11:17.128761  306436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:11:17.128838  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:11:17.128884  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.149194  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.250736  306436 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:11:17.254854  306436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:11:17.254886  306436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:11:17.254899  306436 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:11:17.254950  306436 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:11:17.255040  306436 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:11:17.255125  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:11:17.264376  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:11:17.284781  306436 start.go:296] duration metric: took 156.020863ms for postStartSetup
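Note: postStartSetup mirrors local assets from the .minikube/files tree onto the node, preserving the relative path, which is how files/etc/ssl/certs/92542.pem ends up at /etc/ssl/certs/92542.pem above. A sketch of the same mechanism for an arbitrary file (the motd path is purely illustrative, and this assumes the default MINIKUBE_HOME of ~/.minikube, whereas this CI run points it at /home/jenkins/minikube-integration/22112-5703/.minikube):

    mkdir -p ~/.minikube/files/etc
    echo "hello from the host" > ~/.minikube/files/etc/motd   # synced to /etc/motd on the node
    minikube start -p newest-cni-832562                       # files are re-synced on each start
    minikube -p newest-cni-832562 ssh -- cat /etc/motd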
	I1212 20:11:17.284867  306436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:11:17.284913  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.305853  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.402518  306436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:11:17.406823  306436 fix.go:56] duration metric: took 4.666186138s for fixHost
	I1212 20:11:17.406851  306436 start.go:83] releasing machines lock for "newest-cni-832562", held for 4.666234992s
	I1212 20:11:17.406917  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:17.424782  306436 ssh_runner.go:195] Run: cat /version.json
	I1212 20:11:17.424802  306436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:11:17.424837  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.424858  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.442825  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.443862  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.534872  306436 ssh_runner.go:195] Run: systemctl --version
	I1212 20:11:17.600181  306436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:11:17.641146  306436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:11:17.646092  306436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:11:17.646167  306436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:11:17.654310  306436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:11:17.654332  306436 start.go:496] detecting cgroup driver to use...
	I1212 20:11:17.654363  306436 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:11:17.654404  306436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:11:17.669500  306436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:11:17.681081  306436 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:11:17.681134  306436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:11:17.694386  306436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:11:17.705620  306436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:11:17.784550  306436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:11:17.861606  306436 docker.go:234] disabling docker service ...
	I1212 20:11:17.861656  306436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:11:17.875336  306436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:11:17.888438  306436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:11:17.971427  306436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:11:18.073018  306436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
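Note: because the runtime is CRI-O, minikube stops, disables and masks both the cri-docker and docker units so neither can claim the CRI socket. Confirming their state on the node (a sketch; `systemctl is-enabled` exits non-zero for masked or disabled units, which is expected here):

    minikube -p newest-cni-832562 ssh -- \
      "systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket"
    # expected, per the mask/disable calls above: masked, disabled, masked, disabled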
	I1212 20:11:18.084838  306436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:11:18.098527  306436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:11:18.098580  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.107046  306436 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:11:18.107111  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.116104  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.124558  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.132638  306436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:11:18.140230  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.149072  306436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.158072  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.168037  306436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:11:18.176007  306436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:11:18.183229  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:18.288050  306436 ssh_runner.go:195] Run: sudo systemctl restart crio
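Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O pins registry.k8s.io/pause:3.10.1 as the pause image, uses the systemd cgroup manager with conmon in the "pod" cgroup, and adds a default_sysctls entry opening unprivileged ports, after which the daemon is reloaded and restarted. A sketch for spot-checking the drop-in from a shell on the node (values quoted from the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    sudo systemctl is-active crio   # confirms the restart brought the daemon back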
	I1212 20:11:18.427103  306436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:11:18.427179  306436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:11:18.431178  306436 start.go:564] Will wait 60s for crictl version
	I1212 20:11:18.431236  306436 ssh_runner.go:195] Run: which crictl
	I1212 20:11:18.434958  306436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:11:18.459474  306436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:11:18.459547  306436 ssh_runner.go:195] Run: crio --version
	I1212 20:11:18.486435  306436 ssh_runner.go:195] Run: crio --version
	I1212 20:11:18.514372  306436 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:11:18.515327  306436 cli_runner.go:164] Run: docker network inspect newest-cni-832562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:11:18.531943  306436 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 20:11:18.536350  306436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:11:18.548096  306436 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 20:11:18.991510  301411 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:11:18.991612  301411 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:11:18.991704  301411 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:11:18.991752  301411 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:11:18.991819  301411 kubeadm.go:319] OS: Linux
	I1212 20:11:18.991896  301411 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:11:18.991940  301411 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:11:18.991989  301411 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:11:18.992047  301411 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:11:18.992141  301411 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:11:18.992226  301411 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:11:18.992354  301411 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:11:18.992466  301411 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:11:18.992570  301411 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:11:18.992682  301411 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:11:18.992765  301411 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:11:18.992819  301411 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:11:18.994628  301411 out.go:252]   - Generating certificates and keys ...
	I1212 20:11:18.994711  301411 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:11:18.994809  301411 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:11:18.994900  301411 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:11:18.994976  301411 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:11:18.995071  301411 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:11:18.995158  301411 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:11:18.995244  301411 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:11:18.995445  301411 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:11:18.995531  301411 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:11:18.995672  301411 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:11:18.995783  301411 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:11:18.995852  301411 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:11:18.995893  301411 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:11:18.995963  301411 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:11:18.996022  301411 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:11:18.996090  301411 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:11:18.996165  301411 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:11:18.996286  301411 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:11:18.996370  301411 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:11:18.996501  301411 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:11:18.996557  301411 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:11:18.997850  301411 out.go:252]   - Booting up control plane ...
	I1212 20:11:18.997970  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:11:18.998091  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:11:18.998188  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:11:18.998364  301411 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:11:18.998473  301411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:11:18.998564  301411 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:11:18.998691  301411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:11:18.998761  301411 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:11:18.998930  301411 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:11:18.999095  301411 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:11:18.999181  301411 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001160152s
	I1212 20:11:18.999321  301411 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:11:18.999437  301411 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:11:18.999573  301411 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:11:18.999679  301411 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:11:18.999786  301411 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.861418367s
	I1212 20:11:18.999870  301411 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.421043224s
	I1212 20:11:18.999967  301411 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501427281s
	I1212 20:11:19.000092  301411 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:11:19.000238  301411 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:11:19.000296  301411 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:11:19.000475  301411 kubeadm.go:319] [mark-control-plane] Marking the node auto-789448 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:11:19.000557  301411 kubeadm.go:319] [bootstrap-token] Using token: 37si91.mktn1vtsbp7n8vf2
	I1212 20:11:19.001847  301411 out.go:252]   - Configuring RBAC rules ...
	I1212 20:11:19.001969  301411 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:11:19.002045  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:11:19.002169  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:11:19.002361  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:11:19.002516  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:11:19.002620  301411 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:11:19.002758  301411 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:11:19.002838  301411 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:11:19.002907  301411 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:11:19.002924  301411 kubeadm.go:319] 
	I1212 20:11:19.003018  301411 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:11:19.003027  301411 kubeadm.go:319] 
	I1212 20:11:19.003143  301411 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:11:19.003156  301411 kubeadm.go:319] 
	I1212 20:11:19.003198  301411 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:11:19.003296  301411 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:11:19.003377  301411 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:11:19.003393  301411 kubeadm.go:319] 
	I1212 20:11:19.003453  301411 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:11:19.003463  301411 kubeadm.go:319] 
	I1212 20:11:19.003508  301411 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:11:19.003514  301411 kubeadm.go:319] 
	I1212 20:11:19.003573  301411 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:11:19.003682  301411 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:11:19.003798  301411 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:11:19.003807  301411 kubeadm.go:319] 
	I1212 20:11:19.003932  301411 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:11:19.004037  301411 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:11:19.004060  301411 kubeadm.go:319] 
	I1212 20:11:19.004167  301411 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 37si91.mktn1vtsbp7n8vf2 \
	I1212 20:11:19.004303  301411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:11:19.004334  301411 kubeadm.go:319] 	--control-plane 
	I1212 20:11:19.004343  301411 kubeadm.go:319] 
	I1212 20:11:19.004443  301411 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:11:19.004454  301411 kubeadm.go:319] 
	I1212 20:11:19.004561  301411 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 37si91.mktn1vtsbp7n8vf2 \
	I1212 20:11:19.004687  301411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
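Note: the --discovery-token-ca-cert-hash in the join commands is a SHA-256 over the cluster CA's public key. If it ever needs to be recomputed (a standard kubeadm recipe, not something this run does), it can be derived on the control plane from the CA certificate; minikube keeps its certificateDir at /var/lib/minikube/certs, per the [certs] line above:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should match the hash printed in the join commands above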
	I1212 20:11:19.004700  301411 cni.go:84] Creating CNI manager for ""
	I1212 20:11:19.004709  301411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:19.006102  301411 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:11:18.549351  306436 kubeadm.go:884] updating cluster {Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:11:18.549491  306436 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:11:18.549552  306436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:11:18.581474  306436 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:11:18.581492  306436 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:11:18.581529  306436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:11:18.606848  306436 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:11:18.606866  306436 cache_images.go:86] Images are preloaded, skipping loading
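Note: both `crictl images --output json` calls confirm that the preload tarball already populated CRI-O's image store, so no extraction or pulls are needed. A human-readable equivalent from the node (a sketch):

    minikube -p newest-cni-832562 ssh -- "sudo crictl images"
    # the v1.35.0-beta.0 control-plane images and pause:3.10.1 should already be listed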
	I1212 20:11:18.606879  306436 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 20:11:18.606969  306436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-832562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
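Note: the snippet above is the systemd drop-in minikube generates for the kubelet: ExecStart is cleared and re-set with the node-specific flags (config file, cgroups-per-qos, hostname override, node IP); it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To see the unit exactly as systemd resolves it on the node (a sketch):

    minikube -p newest-cni-832562 ssh -- "systemctl cat kubelet"
    # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf
    # drop-in containing the ExecStart line quoted above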
	I1212 20:11:18.607028  306436 ssh_runner.go:195] Run: crio config
	I1212 20:11:18.650568  306436 cni.go:84] Creating CNI manager for ""
	I1212 20:11:18.650585  306436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:18.650597  306436 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 20:11:18.650621  306436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-832562 NodeName:newest-cni-832562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:11:18.650797  306436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-832562"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:11:18.650875  306436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:11:18.659204  306436 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:11:18.659264  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:11:18.666538  306436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:11:18.678253  306436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:11:18.690427  306436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
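Note: the combined InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document shown above has just been written to /var/tmp/minikube/kubeadm.yaml.new. With a recent kubeadm (the `config validate` subcommand exists from v1.26 on) the file can be sanity-checked in place; a sketch, assuming kubeadm is staged with the other binaries under /var/lib/minikube/binaries as the next lines indicate:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
    # exits 0 when the document is well-formed; otherwise it lists the offending fields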
	I1212 20:11:18.702522  306436 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:11:18.705872  306436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
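Note: as with host.minikube.internal earlier, minikube pins control-plane.minikube.internal to the node IP by filtering the old line out of /etc/hosts, appending the new one, and copying the temp file back. Checking both entries on the node (a sketch):

    minikube -p newest-cni-832562 ssh -- "grep minikube.internal /etc/hosts"
    # expected, per the two rewrites in this log:
    #   192.168.76.1   host.minikube.internal
    #   192.168.76.2   control-plane.minikube.internal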
	I1212 20:11:18.715341  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:18.799933  306436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:18.822260  306436 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562 for IP: 192.168.76.2
	I1212 20:11:18.822291  306436 certs.go:195] generating shared ca certs ...
	I1212 20:11:18.822312  306436 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:18.822472  306436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:11:18.822539  306436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:11:18.822556  306436 certs.go:257] generating profile certs ...
	I1212 20:11:18.822665  306436 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/client.key
	I1212 20:11:18.822742  306436 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.key.a4f7d03e
	I1212 20:11:18.822794  306436 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.key
	I1212 20:11:18.822940  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:11:18.822988  306436 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:11:18.823003  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:11:18.823040  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:11:18.823080  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:11:18.823116  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:11:18.823178  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:11:18.823724  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:11:18.841416  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:11:18.861938  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:11:18.880588  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:11:18.904203  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:11:18.923257  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:11:18.940506  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:11:18.956851  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:11:18.973739  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:11:18.991233  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:11:19.009149  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:11:19.027209  306436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:11:19.039983  306436 ssh_runner.go:195] Run: openssl version
	I1212 20:11:19.046698  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.054113  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:11:19.062666  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.066186  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.066233  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.105711  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:11:19.114638  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.123679  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:11:19.131354  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.135466  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.135523  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.173657  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:11:19.182212  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.190700  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:11:19.198624  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.202780  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.202838  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.246502  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
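Note: the test/ln/openssl triples above are how each CA lands in the node's trust store: the PEM is placed under /usr/share/ca-certificates, `openssl x509 -hash` yields its subject hash, and a <hash>.0 symlink is created in /etc/ssl/certs (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the two jenkins certificates here). A sketch reproducing the link for one of them on the node:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    ls -l "/etc/ssl/certs/${hash}.0"    # expected link name: /etc/ssl/certs/b5213941.0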
	I1212 20:11:19.255168  306436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:11:19.259717  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:11:19.313539  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:11:19.371225  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:11:19.422739  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:11:19.470384  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:11:19.532059  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
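Note: each `openssl x509 -checkend 86400` call asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, 1 means it expires within the window, which is how minikube decides whether the control-plane certs need regenerating. The same check in isolation, for one of the certs above:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h (or already expired)"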
	I1212 20:11:19.588061  306436 kubeadm.go:401] StartCluster: {Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:19.588158  306436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:11:19.588214  306436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:11:19.625669  306436 cri.go:89] found id: "302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511"
	I1212 20:11:19.625691  306436 cri.go:89] found id: "f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1"
	I1212 20:11:19.625696  306436 cri.go:89] found id: "41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f"
	I1212 20:11:19.625701  306436 cri.go:89] found id: "cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564"
	I1212 20:11:19.625705  306436 cri.go:89] found id: ""
	I1212 20:11:19.625749  306436 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:11:19.638803  306436 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:19Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:11:19.638873  306436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:11:19.647053  306436 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:11:19.647070  306436 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:11:19.647111  306436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:11:19.654948  306436 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:11:19.655771  306436 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-832562" does not appear in /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:19.656483  306436 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-5703/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-832562" cluster setting kubeconfig missing "newest-cni-832562" context setting]
	I1212 20:11:19.657615  306436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.659393  306436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:11:19.667192  306436 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 20:11:19.667217  306436 kubeadm.go:602] duration metric: took 20.141054ms to restartPrimaryControlPlane
	I1212 20:11:19.667226  306436 kubeadm.go:403] duration metric: took 79.176832ms to StartCluster
	I1212 20:11:19.667240  306436 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.667307  306436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:19.669327  306436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.669545  306436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:19.669627  306436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:19.669735  306436 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-832562"
	I1212 20:11:19.669753  306436 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-832562"
	W1212 20:11:19.669764  306436 addons.go:248] addon storage-provisioner should already be in state true
	I1212 20:11:19.669770  306436 addons.go:70] Setting dashboard=true in profile "newest-cni-832562"
	I1212 20:11:19.669794  306436 addons.go:239] Setting addon dashboard=true in "newest-cni-832562"
	I1212 20:11:19.669803  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	W1212 20:11:19.669804  306436 addons.go:248] addon dashboard should already be in state true
	I1212 20:11:19.669821  306436 addons.go:70] Setting default-storageclass=true in profile "newest-cni-832562"
	I1212 20:11:19.669845  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:19.669855  306436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-832562"
	I1212 20:11:19.670004  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:19.670151  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.670372  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.670393  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.671836  306436 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:19.673143  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:19.696493  306436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:19.696549  306436 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 20:11:19.698073  306436 addons.go:239] Setting addon default-storageclass=true in "newest-cni-832562"
	W1212 20:11:19.698091  306436 addons.go:248] addon default-storageclass should already be in state true
	I1212 20:11:19.698117  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:19.698299  306436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:19.698320  306436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:19.698389  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.698714  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.699611  306436 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 20:11:19.007171  301411 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:11:19.012044  301411 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:11:19.012058  301411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:11:19.025492  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:11:19.238269  301411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:11:19.238406  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.238445  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-789448 minikube.k8s.io/updated_at=2025_12_12T20_11_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=auto-789448 minikube.k8s.io/primary=true
	I1212 20:11:19.248815  301411 ops.go:34] apiserver oom_adj: -16
	I1212 20:11:19.342028  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.842118  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.700607  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 20:11:19.700623  306436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 20:11:19.700681  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.733819  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.737686  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.738213  306436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:19.738230  306436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:19.738825  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.763235  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.814806  306436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:19.827954  306436 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:19.828021  306436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:19.839160  306436 api_server.go:72] duration metric: took 169.583655ms to wait for apiserver process to appear ...
	I1212 20:11:19.839192  306436 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:19.839213  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:19.851668  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:19.852695  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 20:11:19.852713  306436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 20:11:19.866443  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 20:11:19.866463  306436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 20:11:19.872697  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:19.879944  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 20:11:19.879960  306436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 20:11:19.895031  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 20:11:19.895047  306436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 20:11:19.913465  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 20:11:19.913492  306436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 20:11:19.934394  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 20:11:19.934434  306436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 20:11:19.948772  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 20:11:19.948799  306436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 20:11:19.964030  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 20:11:19.964051  306436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 20:11:19.977064  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 20:11:19.977085  306436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 20:11:19.994045  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 20:11:20.863984  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:11:20.864012  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:11:20.864028  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:20.873597  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:11:20.873626  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:11:21.339755  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:21.345549  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:11:21.345582  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:11:21.486618  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.613891926s)
	I1212 20:11:21.486815  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635117346s)
	I1212 20:11:21.486881  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.492798287s)
	I1212 20:11:21.488852  306436 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-832562 addons enable metrics-server
	
	I1212 20:11:21.499486  306436 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1212 20:11:17.537795  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	W1212 20:11:19.538262  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 12 20:11:10 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:10.346662227Z" level=info msg="Started container" PID=1819 containerID=a3ee9795d6f18fdef1f1fd87e5762106efc920a5c3b4066f4292f418a55b4fae description=kube-system/storage-provisioner/storage-provisioner id=780bc537-4d49-4355-ad31-c9828d8f0b5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0de9e0e2f8692da474239d3c67e51c62e39d079ec47bce9be6c31a962f1607b7
	Dec 12 20:11:10 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:10.348016526Z" level=info msg="Started container" PID=1820 containerID=3e98461a874f8d9dedde68839e1b8bf732e7729ee805e88037709ad54cb0c3bf description=kube-system/coredns-66bc5c9577-8wnb6/coredns id=edb5dba0-2914-4b7b-892f-cc9ccc7d9def name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a37d6c0e6ebb4efb6832b73673af4840b05a345cc7e55c405504a13f74bf53e
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.194513036Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a174f44b-57f0-4a1c-83e0-1ce0c8df750c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.194599185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.199988246Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44dce45271df9e3debd75c3f0e3992ea87bf72893d3ac50a99cf4c676d9351e7 UID:4c0c6390-93fc-431e-ab56-29f5ec5d45ba NetNS:/var/run/netns/66872dd0-1a7d-4b99-b82c-afdc9d9cd22a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003b8428}] Aliases:map[]}"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.200023505Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.210252139Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44dce45271df9e3debd75c3f0e3992ea87bf72893d3ac50a99cf4c676d9351e7 UID:4c0c6390-93fc-431e-ab56-29f5ec5d45ba NetNS:/var/run/netns/66872dd0-1a7d-4b99-b82c-afdc9d9cd22a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003b8428}] Aliases:map[]}"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.210444122Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.211365691Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.212787727Z" level=info msg="Ran pod sandbox 44dce45271df9e3debd75c3f0e3992ea87bf72893d3ac50a99cf4c676d9351e7 with infra container: default/busybox/POD" id=a174f44b-57f0-4a1c-83e0-1ce0c8df750c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.214973172Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=072987a0-042d-4bab-8701-ffee2c5cbd3b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.215486175Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=072987a0-042d-4bab-8701-ffee2c5cbd3b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.215536772Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=072987a0-042d-4bab-8701-ffee2c5cbd3b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.216501424Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92d9f8b3-8003-48c5-ae61-e0399116a619 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.219414724Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.915926551Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=92d9f8b3-8003-48c5-ae61-e0399116a619 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.916460897Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f475dfee-627e-47b2-8aef-c9883fcbc125 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.917625002Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0fc05aa4-9b56-426c-969b-bc2af1508ef3 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.920709242Z" level=info msg="Creating container: default/busybox/busybox" id=c03259ff-057b-4a3b-a2a0-4864de93bc7a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.920831091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.925061353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.925618311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.954201587Z" level=info msg="Created container 2355d4cb8b494da5e87c7ed7f84a8fbc96a3f372113eb79cf7dd456ce8a2176b: default/busybox/busybox" id=c03259ff-057b-4a3b-a2a0-4864de93bc7a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.954692686Z" level=info msg="Starting container: 2355d4cb8b494da5e87c7ed7f84a8fbc96a3f372113eb79cf7dd456ce8a2176b" id=c90bcb70-82cd-49c5-9fa5-22be03504275 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:13 default-k8s-diff-port-433034 crio[778]: time="2025-12-12T20:11:13.956205954Z" level=info msg="Started container" PID=1897 containerID=2355d4cb8b494da5e87c7ed7f84a8fbc96a3f372113eb79cf7dd456ce8a2176b description=default/busybox/busybox id=c90bcb70-82cd-49c5-9fa5-22be03504275 name=/runtime.v1.RuntimeService/StartContainer sandboxID=44dce45271df9e3debd75c3f0e3992ea87bf72893d3ac50a99cf4c676d9351e7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2355d4cb8b494       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   44dce45271df9       busybox                                                default
	3e98461a874f8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   3a37d6c0e6ebb       coredns-66bc5c9577-8wnb6                               kube-system
	a3ee9795d6f18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   0de9e0e2f8692       storage-provisioner                                    kube-system
	4510e609f3f50       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   bdea882cff281       kindnet-w6vcl                                          kube-system
	ad7ce18c19722       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   60e23e7dfd643       kube-proxy-tmrrg                                       kube-system
	468221615cfd5       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   fbc974f467aa9       kube-apiserver-default-k8s-diff-port-433034            kube-system
	d233fb4a83f6f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   3c8f6e4a7e557       etcd-default-k8s-diff-port-433034                      kube-system
	97c3e60d941d2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   08e5d5069d1ff       kube-scheduler-default-k8s-diff-port-433034            kube-system
	3ffb1a36be417       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   4b5dc0a4a8f5e       kube-controller-manager-default-k8s-diff-port-433034   kube-system
	
	
	==> coredns [3e98461a874f8d9dedde68839e1b8bf732e7729ee805e88037709ad54cb0c3bf] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40150 - 28178 "HINFO IN 3038125116348118846.3246055880029816767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104774535s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-433034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-433034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=default-k8s-diff-port-433034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-433034
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:11:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:11:09 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:11:09 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:11:09 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:11:09 +0000   Fri, 12 Dec 2025 20:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-433034
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                50f00333-6091-4f07-9dbc-f9936dd93205
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-8wnb6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-433034                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-w6vcl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-433034             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-433034    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-tmrrg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-433034             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-433034 event: Registered Node default-k8s-diff-port-433034 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-433034 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [d233fb4a83f6f02a4017ba3756eb4f26d421db56542ef43746bcc9ce30143cf0] <==
	{"level":"info","ts":"2025-12-12T20:10:59.244299Z","caller":"traceutil/trace.go:172","msg":"trace[1467007764] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"113.945648ms","start":"2025-12-12T20:10:59.130302Z","end":"2025-12-12T20:10:59.244248Z","steps":["trace[1467007764] 'process raft request'  (duration: 96.346446ms)","trace[1467007764] 'compare'  (duration: 17.46707ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:59.245606Z","caller":"traceutil/trace.go:172","msg":"trace[1857848765] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"111.479006ms","start":"2025-12-12T20:10:59.134114Z","end":"2025-12-12T20:10:59.245593Z","steps":["trace[1857848765] 'process raft request'  (duration: 111.26015ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.245659Z","caller":"traceutil/trace.go:172","msg":"trace[512051182] transaction","detail":"{read_only:false; number_of_response:1; response_revision:357; }","duration":"108.031825ms","start":"2025-12-12T20:10:59.137613Z","end":"2025-12-12T20:10:59.245645Z","steps":["trace[512051182] 'process raft request'  (duration: 107.811585ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.245922Z","caller":"traceutil/trace.go:172","msg":"trace[1787180031] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"111.584599ms","start":"2025-12-12T20:10:59.133927Z","end":"2025-12-12T20:10:59.245512Z","steps":["trace[1787180031] 'process raft request'  (duration: 111.362384ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T20:10:59.473028Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.485815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-2jgnr\" limit:1 ","response":"range_response_count:1 size:4327"}
	{"level":"info","ts":"2025-12-12T20:10:59.473099Z","caller":"traceutil/trace.go:172","msg":"trace[699169010] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-2jgnr; range_end:; response_count:1; response_revision:359; }","duration":"151.566192ms","start":"2025-12-12T20:10:59.321517Z","end":"2025-12-12T20:10:59.473083Z","steps":["trace[699169010] 'agreement among raft nodes before linearized reading'  (duration: 98.500945ms)","trace[699169010] 'range keys from in-memory index tree'  (duration: 52.873971ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:59.473114Z","caller":"traceutil/trace.go:172","msg":"trace[1451668933] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"217.042024ms","start":"2025-12-12T20:10:59.256057Z","end":"2025-12-12T20:10:59.473099Z","steps":["trace[1451668933] 'process raft request'  (duration: 216.980684ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.473141Z","caller":"traceutil/trace.go:172","msg":"trace[1015102230] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"217.492964ms","start":"2025-12-12T20:10:59.255630Z","end":"2025-12-12T20:10:59.473123Z","steps":["trace[1015102230] 'process raft request'  (duration: 164.434619ms)","trace[1015102230] 'compare'  (duration: 52.85786ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:59.475189Z","caller":"traceutil/trace.go:172","msg":"trace[1484932964] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"153.622434ms","start":"2025-12-12T20:10:59.321550Z","end":"2025-12-12T20:10:59.475173Z","steps":["trace[1484932964] 'process raft request'  (duration: 153.438195ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.475708Z","caller":"traceutil/trace.go:172","msg":"trace[1119402453] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"151.868521ms","start":"2025-12-12T20:10:59.323829Z","end":"2025-12-12T20:10:59.475698Z","steps":["trace[1119402453] 'process raft request'  (duration: 151.80293ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.613317Z","caller":"traceutil/trace.go:172","msg":"trace[646492647] linearizableReadLoop","detail":"{readStateIndex:377; appliedIndex:377; }","duration":"137.614903ms","start":"2025-12-12T20:10:59.475680Z","end":"2025-12-12T20:10:59.613295Z","steps":["trace[646492647] 'read index received'  (duration: 137.607471ms)","trace[646492647] 'applied index is now lower than readState.Index'  (duration: 6.533µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:59.635780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.079903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:10:59.635845Z","caller":"traceutil/trace.go:172","msg":"trace[1577640282] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:363; }","duration":"160.153589ms","start":"2025-12-12T20:10:59.475676Z","end":"2025-12-12T20:10:59.635830Z","steps":["trace[1577640282] 'agreement among raft nodes before linearized reading'  (duration: 137.714332ms)","trace[1577640282] 'range keys from in-memory index tree'  (duration: 22.339344ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:10:59.636392Z","caller":"traceutil/trace.go:172","msg":"trace[510280171] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"154.18969ms","start":"2025-12-12T20:10:59.482190Z","end":"2025-12-12T20:10:59.636380Z","steps":["trace[510280171] 'process raft request'  (duration: 154.151943ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.636527Z","caller":"traceutil/trace.go:172","msg":"trace[205602679] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"161.020018ms","start":"2025-12-12T20:10:59.475491Z","end":"2025-12-12T20:10:59.636512Z","steps":["trace[205602679] 'process raft request'  (duration: 137.918943ms)","trace[205602679] 'compare'  (duration: 22.602977ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:59.636551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.979249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-433034\" limit:1 ","response":"range_response_count:1 size:5663"}
	{"level":"info","ts":"2025-12-12T20:10:59.636928Z","caller":"traceutil/trace.go:172","msg":"trace[1078260602] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-433034; range_end:; response_count:1; response_revision:366; }","duration":"127.363384ms","start":"2025-12-12T20:10:59.509552Z","end":"2025-12-12T20:10:59.636915Z","steps":["trace[1078260602] 'agreement among raft nodes before linearized reading'  (duration: 126.895451ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.638452Z","caller":"traceutil/trace.go:172","msg":"trace[517663355] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"161.092027ms","start":"2025-12-12T20:10:59.477343Z","end":"2025-12-12T20:10:59.638435Z","steps":["trace[517663355] 'process raft request'  (duration: 158.924296ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:10:59.767223Z","caller":"traceutil/trace.go:172","msg":"trace[871084393] linearizableReadLoop","detail":"{readStateIndex:380; appliedIndex:380; }","duration":"127.400381ms","start":"2025-12-12T20:10:59.639801Z","end":"2025-12-12T20:10:59.767201Z","steps":["trace[871084393] 'read index received'  (duration: 127.388167ms)","trace[871084393] 'applied index is now lower than readState.Index'  (duration: 10.955µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:59.907054Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"267.228367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-12-12T20:10:59.907114Z","caller":"traceutil/trace.go:172","msg":"trace[90921163] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:366; }","duration":"267.306351ms","start":"2025-12-12T20:10:59.639792Z","end":"2025-12-12T20:10:59.907098Z","steps":["trace[90921163] 'agreement among raft nodes before linearized reading'  (duration: 127.501813ms)","trace[90921163] 'range keys from in-memory index tree'  (duration: 139.624109ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:10:59.907479Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.083853ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790638098406273 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-2jgnr\" mod_revision:364 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-2jgnr\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-2jgnr\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-12-12T20:10:59.907777Z","caller":"traceutil/trace.go:172","msg":"trace[393169304] transaction","detail":"{read_only:false; number_of_response:1; response_revision:367; }","duration":"268.6504ms","start":"2025-12-12T20:10:59.639099Z","end":"2025-12-12T20:10:59.907749Z","steps":["trace[393169304] 'process raft request'  (duration: 128.119156ms)","trace[393169304] 'compare'  (duration: 139.937277ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:11:00.244640Z","caller":"traceutil/trace.go:172","msg":"trace[520285697] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"128.122584ms","start":"2025-12-12T20:11:00.116492Z","end":"2025-12-12T20:11:00.244614Z","steps":["trace[520285697] 'process raft request'  (duration: 122.346375ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:11:00.244765Z","caller":"traceutil/trace.go:172","msg":"trace[1829898575] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"128.009099ms","start":"2025-12-12T20:11:00.116741Z","end":"2025-12-12T20:11:00.244750Z","steps":["trace[1829898575] 'process raft request'  (duration: 127.927791ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:11:22 up 53 min,  0 user,  load average: 4.50, 2.46, 1.66
	Linux default-k8s-diff-port-433034 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4510e609f3f50da4ff99e6dfcf6717c6c926ed346731d268b199c7a3735ba656] <==
	I1212 20:10:59.435466       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:10:59.435715       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 20:10:59.435832       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:10:59.435851       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:10:59.435879       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:10:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:10:59.634649       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:10:59.651702       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:10:59.651831       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:10:59.652166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:10:59.952433       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:10:59.952469       1 metrics.go:72] Registering metrics
	I1212 20:10:59.952526       1 controller.go:711] "Syncing nftables rules"
	I1212 20:11:09.636442       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:11:09.636519       1 main.go:301] handling current node
	I1212 20:11:19.637374       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:11:19.637411       1 main.go:301] handling current node
	
	
	==> kube-apiserver [468221615cfd5f8fbcaf6eb2455461fa1449b4af89b3cb8c1ff4025fb842981c] <==
	I1212 20:10:50.310217       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:10:50.316964       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:10:50.317193       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 20:10:50.325580       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:10:50.325745       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 20:10:50.325974       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:10:50.336587       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:10:51.212894       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 20:10:51.219044       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:10:51.219061       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:10:51.707403       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:10:51.746581       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:10:51.817119       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:10:51.824214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1212 20:10:51.825186       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:10:51.829709       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:10:52.244524       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:10:52.995213       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:10:53.003144       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:10:53.010233       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 20:10:57.249640       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:10:57.257556       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:10:57.898402       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:10:58.048091       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1212 20:11:21.023811       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:46832: use of closed network connection
	
	
	==> kube-controller-manager [3ffb1a36be41740cfa02256e39dc77a3545bc8f7d344fc8f3ecad7974af0f4d0] <==
	I1212 20:10:57.244351       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 20:10:57.244757       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:10:57.244799       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 20:10:57.244892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 20:10:57.245008       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-433034"
	I1212 20:10:57.245059       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 20:10:57.245059       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 20:10:57.245586       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:10:57.245627       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 20:10:57.246487       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 20:10:57.246595       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 20:10:57.247647       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 20:10:57.253337       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 20:10:57.253389       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 20:10:57.253424       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 20:10:57.253432       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 20:10:57.253438       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 20:10:57.257690       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:10:57.257996       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 20:10:57.265182       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:10:57.265858       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-433034" podCIDRs=["10.244.0.0/24"]
	I1212 20:10:57.267470       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 20:10:57.274334       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 20:10:57.274474       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:11:12.246842       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad7ce18c1972265569ec50b032858fc915895806fa0a0ed72003dfe5313526e5] <==
	I1212 20:10:58.962702       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:10:59.030026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:10:59.130602       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:10:59.130645       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 20:10:59.130728       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:10:59.151439       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:10:59.151491       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:10:59.156552       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:10:59.156964       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:10:59.157000       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:10:59.158254       1 config.go:200] "Starting service config controller"
	I1212 20:10:59.158357       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:10:59.158401       1 config.go:309] "Starting node config controller"
	I1212 20:10:59.158420       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:10:59.158439       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:10:59.158457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:10:59.158553       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:10:59.158560       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:10:59.259420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:10:59.259455       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:10:59.259454       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:10:59.259492       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [97c3e60d941d2cb0602c811f2999e75b9b27396eec461a131c46adc2d781d4c8] <==
	E1212 20:10:50.272831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:10:50.272892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 20:10:50.272932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 20:10:50.272985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 20:10:50.273038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 20:10:50.273106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 20:10:50.273350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 20:10:50.273479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 20:10:50.273622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:10:50.272838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 20:10:50.273847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 20:10:51.116926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 20:10:51.159430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 20:10:51.162405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:10:51.163390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 20:10:51.187010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 20:10:51.212651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:10:51.222722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 20:10:51.225730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:10:51.318718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:10:51.391472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 20:10:51.419662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 20:10:51.462782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 20:10:51.608017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1212 20:10:54.468182       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:10:53 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:53.889193    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-433034" podStartSLOduration=1.8891832339999999 podStartE2EDuration="1.889183234s" podCreationTimestamp="2025-12-12 20:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:10:53.888978956 +0000 UTC m=+1.135922993" watchObservedRunningTime="2025-12-12 20:10:53.889183234 +0000 UTC m=+1.136127246"
	Dec 12 20:10:53 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:53.904034    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-433034" podStartSLOduration=1.90401281 podStartE2EDuration="1.90401281s" podCreationTimestamp="2025-12-12 20:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:10:53.904002958 +0000 UTC m=+1.150946963" watchObservedRunningTime="2025-12-12 20:10:53.90401281 +0000 UTC m=+1.150956821"
	Dec 12 20:10:53 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:53.914663    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-433034" podStartSLOduration=2.914643102 podStartE2EDuration="2.914643102s" podCreationTimestamp="2025-12-12 20:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:10:53.913645112 +0000 UTC m=+1.160589124" watchObservedRunningTime="2025-12-12 20:10:53.914643102 +0000 UTC m=+1.161587113"
	Dec 12 20:10:57 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:57.297477    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 20:10:57 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:57.298165    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158187    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0ecd4179-4c6f-4d19-84fc-4888d574af7d-cni-cfg\") pod \"kindnet-w6vcl\" (UID: \"0ecd4179-4c6f-4d19-84fc-4888d574af7d\") " pod="kube-system/kindnet-w6vcl"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158234    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ecd4179-4c6f-4d19-84fc-4888d574af7d-xtables-lock\") pod \"kindnet-w6vcl\" (UID: \"0ecd4179-4c6f-4d19-84fc-4888d574af7d\") " pod="kube-system/kindnet-w6vcl"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158260    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzq9\" (UniqueName: \"kubernetes.io/projected/0ecd4179-4c6f-4d19-84fc-4888d574af7d-kube-api-access-sjzq9\") pod \"kindnet-w6vcl\" (UID: \"0ecd4179-4c6f-4d19-84fc-4888d574af7d\") " pod="kube-system/kindnet-w6vcl"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158300    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c690fe66-e68a-4b9b-a2c5-c43c469d876d-kube-proxy\") pod \"kube-proxy-tmrrg\" (UID: \"c690fe66-e68a-4b9b-a2c5-c43c469d876d\") " pod="kube-system/kube-proxy-tmrrg"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158325    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c690fe66-e68a-4b9b-a2c5-c43c469d876d-xtables-lock\") pod \"kube-proxy-tmrrg\" (UID: \"c690fe66-e68a-4b9b-a2c5-c43c469d876d\") " pod="kube-system/kube-proxy-tmrrg"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158343    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c690fe66-e68a-4b9b-a2c5-c43c469d876d-lib-modules\") pod \"kube-proxy-tmrrg\" (UID: \"c690fe66-e68a-4b9b-a2c5-c43c469d876d\") " pod="kube-system/kube-proxy-tmrrg"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158362    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfpd\" (UniqueName: \"kubernetes.io/projected/c690fe66-e68a-4b9b-a2c5-c43c469d876d-kube-api-access-ndfpd\") pod \"kube-proxy-tmrrg\" (UID: \"c690fe66-e68a-4b9b-a2c5-c43c469d876d\") " pod="kube-system/kube-proxy-tmrrg"
	Dec 12 20:10:58 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:10:58.158383    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ecd4179-4c6f-4d19-84fc-4888d574af7d-lib-modules\") pod \"kindnet-w6vcl\" (UID: \"0ecd4179-4c6f-4d19-84fc-4888d574af7d\") " pod="kube-system/kindnet-w6vcl"
	Dec 12 20:11:00 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:00.022855    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w6vcl" podStartSLOduration=2.022829554 podStartE2EDuration="2.022829554s" podCreationTimestamp="2025-12-12 20:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:10:59.944477757 +0000 UTC m=+7.191421783" watchObservedRunningTime="2025-12-12 20:11:00.022829554 +0000 UTC m=+7.269773566"
	Dec 12 20:11:00 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:00.928717    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tmrrg" podStartSLOduration=2.928694375 podStartE2EDuration="2.928694375s" podCreationTimestamp="2025-12-12 20:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:00.023217534 +0000 UTC m=+7.270161546" watchObservedRunningTime="2025-12-12 20:11:00.928694375 +0000 UTC m=+8.175638387"
	Dec 12 20:11:09 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:09.969569    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 20:11:10 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:10.051194    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t94td\" (UniqueName: \"kubernetes.io/projected/69ded131-bbff-4310-a413-9d647707a4bb-kube-api-access-t94td\") pod \"coredns-66bc5c9577-8wnb6\" (UID: \"69ded131-bbff-4310-a413-9d647707a4bb\") " pod="kube-system/coredns-66bc5c9577-8wnb6"
	Dec 12 20:11:10 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:10.051253    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/804de711-6d78-48dd-a054-5c08899d72b6-tmp\") pod \"storage-provisioner\" (UID: \"804de711-6d78-48dd-a054-5c08899d72b6\") " pod="kube-system/storage-provisioner"
	Dec 12 20:11:10 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:10.051306    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69ded131-bbff-4310-a413-9d647707a4bb-config-volume\") pod \"coredns-66bc5c9577-8wnb6\" (UID: \"69ded131-bbff-4310-a413-9d647707a4bb\") " pod="kube-system/coredns-66bc5c9577-8wnb6"
	Dec 12 20:11:10 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:10.051331    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52tkn\" (UniqueName: \"kubernetes.io/projected/804de711-6d78-48dd-a054-5c08899d72b6-kube-api-access-52tkn\") pod \"storage-provisioner\" (UID: \"804de711-6d78-48dd-a054-5c08899d72b6\") " pod="kube-system/storage-provisioner"
	Dec 12 20:11:10 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:10.916627    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8wnb6" podStartSLOduration=12.916606041 podStartE2EDuration="12.916606041s" podCreationTimestamp="2025-12-12 20:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:10.916599109 +0000 UTC m=+18.163543147" watchObservedRunningTime="2025-12-12 20:11:10.916606041 +0000 UTC m=+18.163550054"
	Dec 12 20:11:10 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:10.936266    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.936218986 podStartE2EDuration="10.936218986s" podCreationTimestamp="2025-12-12 20:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:10.936181924 +0000 UTC m=+18.183125936" watchObservedRunningTime="2025-12-12 20:11:10.936218986 +0000 UTC m=+18.183162996"
	Dec 12 20:11:12 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:12.971161    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bwgw\" (UniqueName: \"kubernetes.io/projected/4c0c6390-93fc-431e-ab56-29f5ec5d45ba-kube-api-access-2bwgw\") pod \"busybox\" (UID: \"4c0c6390-93fc-431e-ab56-29f5ec5d45ba\") " pod="default/busybox"
	Dec 12 20:11:14 default-k8s-diff-port-433034 kubelet[1313]: I1212 20:11:14.929543    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.228272075 podStartE2EDuration="2.929522758s" podCreationTimestamp="2025-12-12 20:11:12 +0000 UTC" firstStartedPulling="2025-12-12 20:11:13.215873668 +0000 UTC m=+20.462817674" lastFinishedPulling="2025-12-12 20:11:13.917124351 +0000 UTC m=+21.164068357" observedRunningTime="2025-12-12 20:11:14.929263894 +0000 UTC m=+22.176207905" watchObservedRunningTime="2025-12-12 20:11:14.929522758 +0000 UTC m=+22.176466770"
	Dec 12 20:11:21 default-k8s-diff-port-433034 kubelet[1313]: E1212 20:11:21.023683    1313 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37120->127.0.0.1:46583: write tcp 127.0.0.1:37120->127.0.0.1:46583: write: broken pipe
	
	
	==> storage-provisioner [a3ee9795d6f18fdef1f1fd87e5762106efc920a5c3b4066f4292f418a55b4fae] <==
	I1212 20:11:10.361363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:11:10.371766       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:11:10.371829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:11:10.374190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:10.379588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:11:10.379786       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:11:10.379867       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"993049af-7bb6-48bb-a2c2-ac2e2f6fa3e3", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-433034_19664fe6-7cad-45a5-a30e-239b00064602 became leader
	I1212 20:11:10.379940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433034_19664fe6-7cad-45a5-a30e-239b00064602!
	W1212 20:11:10.381345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:10.386700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:11:10.480059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433034_19664fe6-7cad-45a5-a30e-239b00064602!
	W1212 20:11:12.390000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:12.394539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:14.398160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:14.401805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:16.405145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:16.408893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:18.412344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:18.416564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:20.419762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:20.423487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:22.432978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:22.449816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
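For reference, two patterns in the log tail above are worth separating from the actual failure: the kube-scheduler "Failed to watch ... is forbidden" errors at 20:10:51 are the kind that commonly appear while RBAC objects are still being created during control-plane startup, and the storage-provisioner's repeating "v1 Endpoints is deprecated" warnings most likely come from its leader-election renewals against the kube-system/k8s.io-minikube-hostpath Endpoints lock shown above. A minimal sketch for checking whether either persists (the context name is taken from the kubectl command below; the `auth can-i` form is an illustration, not part of the test):

	# does the scheduler user now have the list permission it was denied at startup?
	kubectl --context default-k8s-diff-port-433034 auth can-i list statefulsets.apps --as=system:kube-scheduler
	# inspect the Endpoints object used as the leader-election lock (the source of the deprecation warnings)
	kubectl --context default-k8s-diff-port-433034 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml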
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-433034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-832562 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-832562 --alsologtostderr -v=1: exit status 80 (2.300140046s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-832562 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:11:23.121843  309761 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:11:23.122121  309761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:23.122132  309761 out.go:374] Setting ErrFile to fd 2...
	I1212 20:11:23.122136  309761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:23.122395  309761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:11:23.122669  309761 out.go:368] Setting JSON to false
	I1212 20:11:23.122690  309761 mustload.go:66] Loading cluster: newest-cni-832562
	I1212 20:11:23.123111  309761 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:23.123548  309761 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:23.142782  309761 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:23.143130  309761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:23.205045  309761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-12 20:11:23.195478811 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:23.205844  309761 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765505725-22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765505725-22112-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-832562 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 20:11:23.207673  309761 out.go:179] * Pausing node newest-cni-832562 ... 
	I1212 20:11:23.208837  309761 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:23.209171  309761 ssh_runner.go:195] Run: systemctl --version
	I1212 20:11:23.209219  309761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:23.237416  309761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:23.339247  309761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:23.353660  309761 pause.go:52] kubelet running: true
	I1212 20:11:23.353736  309761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:11:23.561746  309761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:11:23.561899  309761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:11:23.692686  309761 cri.go:89] found id: "20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1"
	I1212 20:11:23.692716  309761 cri.go:89] found id: "1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697"
	I1212 20:11:23.692723  309761 cri.go:89] found id: "302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511"
	I1212 20:11:23.692729  309761 cri.go:89] found id: "f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1"
	I1212 20:11:23.692733  309761 cri.go:89] found id: "41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f"
	I1212 20:11:23.692738  309761 cri.go:89] found id: "cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564"
	I1212 20:11:23.692742  309761 cri.go:89] found id: ""
	I1212 20:11:23.692799  309761 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:11:23.710592  309761 retry.go:31] will retry after 184.750558ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:23Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:11:23.896178  309761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:23.909594  309761 pause.go:52] kubelet running: false
	I1212 20:11:23.909663  309761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:11:24.042927  309761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:11:24.042984  309761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:11:24.121317  309761 cri.go:89] found id: "20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1"
	I1212 20:11:24.121342  309761 cri.go:89] found id: "1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697"
	I1212 20:11:24.121348  309761 cri.go:89] found id: "302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511"
	I1212 20:11:24.121351  309761 cri.go:89] found id: "f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1"
	I1212 20:11:24.121355  309761 cri.go:89] found id: "41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f"
	I1212 20:11:24.121358  309761 cri.go:89] found id: "cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564"
	I1212 20:11:24.121361  309761 cri.go:89] found id: ""
	I1212 20:11:24.121396  309761 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:11:24.132768  309761 retry.go:31] will retry after 314.782691ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:24Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:11:24.448339  309761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:24.461038  309761 pause.go:52] kubelet running: false
	I1212 20:11:24.461097  309761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:11:24.614869  309761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:11:24.614967  309761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:11:24.702699  309761 cri.go:89] found id: "20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1"
	I1212 20:11:24.702723  309761 cri.go:89] found id: "1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697"
	I1212 20:11:24.702728  309761 cri.go:89] found id: "302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511"
	I1212 20:11:24.702735  309761 cri.go:89] found id: "f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1"
	I1212 20:11:24.702740  309761 cri.go:89] found id: "41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f"
	I1212 20:11:24.702745  309761 cri.go:89] found id: "cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564"
	I1212 20:11:24.702749  309761 cri.go:89] found id: ""
	I1212 20:11:24.702796  309761 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:11:24.716032  309761 retry.go:31] will retry after 430.151605ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:24Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:11:25.146612  309761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:25.159106  309761 pause.go:52] kubelet running: false
	I1212 20:11:25.159161  309761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:11:25.270630  309761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:11:25.270705  309761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:11:25.333832  309761 cri.go:89] found id: "20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1"
	I1212 20:11:25.333863  309761 cri.go:89] found id: "1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697"
	I1212 20:11:25.333869  309761 cri.go:89] found id: "302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511"
	I1212 20:11:25.333874  309761 cri.go:89] found id: "f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1"
	I1212 20:11:25.333879  309761 cri.go:89] found id: "41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f"
	I1212 20:11:25.333884  309761 cri.go:89] found id: "cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564"
	I1212 20:11:25.333888  309761 cri.go:89] found id: ""
	I1212 20:11:25.333931  309761 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:11:25.347748  309761 out.go:203] 
	W1212 20:11:25.348892  309761 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:11:25.348909  309761 out.go:285] * 
	* 
	W1212 20:11:25.353499  309761 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:11:25.354656  309761 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-832562 --alsologtostderr -v=1 failed: exit status 80
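For context, the pause path in the stderr above loops over the same three checks on each retry (is kubelet active, crictl listing of kube-system/kubernetes-dashboard/istio-operator containers, then `sudo runc list -f json`), backing off 184ms, 314ms and 430ms before giving up on the persistent "open /run/runc: no such file or directory" error and exiting with GUEST_PAUSE. A rough way to replay those checks by hand against the node container (the `docker exec` wrapper and the final `ls` are assumptions for illustration; the inner commands are copied from the log):

	docker exec newest-cni-832562 sudo systemctl is-active --quiet service kubelet; echo "kubelet active (0=yes): $?"
	docker exec newest-cni-832562 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	docker exec newest-cni-832562 sudo runc list -f json   # fails in the log with: open /run/runc: no such file or directory
	docker exec newest-cni-832562 ls -ld /run/runc         # check whether the runc state directory exists at all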
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-832562
helpers_test.go:244: (dbg) docker inspect newest-cni-832562:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd",
	        "Created": "2025-12-12T20:10:44.178344468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306664,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:11:12.791871124Z",
	            "FinishedAt": "2025-12-12T20:11:11.916111093Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/hosts",
	        "LogPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd-json.log",
	        "Name": "/newest-cni-832562",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-832562:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-832562",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd",
	                "LowerDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-832562",
	                "Source": "/var/lib/docker/volumes/newest-cni-832562/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-832562",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-832562",
	                "name.minikube.sigs.k8s.io": "newest-cni-832562",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a73404c8253f2d70c853528bde9a625fc51640e010c48cad84fefe7a3d59c03e",
	            "SandboxKey": "/var/run/docker/netns/a73404c8253f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-832562": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b0e30eb6e7a6239611b037a06cb38c24c42431a49eddf41a41622bd55f96edd",
	                    "EndpointID": "fad7f7148ce91e2c771942a60c9744047310038477cd15373b6d3d2214e2006f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "7a:3e:77:8d:91:d5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-832562",
	                        "2b8b85447870"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
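The inspect dump above is the complete JSON; when only a few fields matter, `docker inspect -f` with a Go template keeps it short. The second command uses the same template the pause code ran at 20:11:23 to find the mapped SSH port; the first is an assumed variant for checking run/pause state:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-832562
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-832562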
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562: exit status 2 (315.018565ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
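The non-zero status here is plausible rather than a second failure: the pause attempt above already ran `systemctl disable --now kubelet`, so the container (Host) is Running while kubelet is stopped, and `minikube status` reports that mixed state through its exit code, which is why the harness notes "(may be ok)". A quick way to see the individual components (the {{.Kubelet}} field name is assumed; {{.Host}} and {{.APIServer}} appear in the harness commands elsewhere in this report):

	out/minikube-linux-amd64 status -p newest-cni-832562 --format='{{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'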
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-832562 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                            │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                         │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p disable-driver-mounts-044739                                                                                                                                                                                                                      │ disable-driver-mounts-044739 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image   │ no-preload-753103 image list --format=json                                                                                                                                                                                                           │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p no-preload-753103 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p auto-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ stop    │ -p newest-cni-832562 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-832562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ image   │ newest-cni-832562 image list --format=json                                                                                                                                                                                                           │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ pause   │ -p newest-cni-832562 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433034 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:11:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:11:12.532737  306436 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:11:12.532986  306436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:12.532995  306436 out.go:374] Setting ErrFile to fd 2...
	I1212 20:11:12.532999  306436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:12.533167  306436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:11:12.533560  306436 out.go:368] Setting JSON to false
	I1212 20:11:12.534675  306436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3219,"bootTime":1765567053,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:11:12.534737  306436 start.go:143] virtualization: kvm guest
	I1212 20:11:12.536616  306436 out.go:179] * [newest-cni-832562] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:11:12.537669  306436 notify.go:221] Checking for updates...
	I1212 20:11:12.537685  306436 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:11:12.538838  306436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:11:12.540254  306436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:12.541433  306436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:11:12.542571  306436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:11:12.543568  306436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:11:12.544902  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:12.545459  306436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:11:12.570695  306436 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:11:12.570780  306436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:12.629539  306436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:11:12.618700808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:12.629686  306436 docker.go:319] overlay module found
	I1212 20:11:12.631198  306436 out.go:179] * Using the docker driver based on existing profile
	I1212 20:11:11.426717  289770 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433034" is "Ready"
	I1212 20:11:11.426743  289770 pod_ready.go:86] duration metric: took 384.436254ms for pod "kube-controller-manager-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:11.625409  289770 pod_ready.go:83] waiting for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.026631  289770 pod_ready.go:94] pod "kube-proxy-tmrrg" is "Ready"
	I1212 20:11:12.026656  289770 pod_ready.go:86] duration metric: took 401.222833ms for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.227624  289770 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.626733  289770 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433034" is "Ready"
	I1212 20:11:12.626762  289770 pod_ready.go:86] duration metric: took 399.116059ms for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.626778  289770 pod_ready.go:40] duration metric: took 1.604405948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:11:12.686012  289770 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:11:12.687473  289770 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433034" cluster and "default" namespace by default
	I1212 20:11:12.632248  306436 start.go:309] selected driver: docker
	I1212 20:11:12.632261  306436 start.go:927] validating driver "docker" against &{Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:12.632407  306436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:11:12.633116  306436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:12.702610  306436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:11:12.690486315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:12.702958  306436 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 20:11:12.702986  306436 cni.go:84] Creating CNI manager for ""
	I1212 20:11:12.703053  306436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:12.703091  306436 start.go:353] cluster config:
	{Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:12.704612  306436 out.go:179] * Starting "newest-cni-832562" primary control-plane node in "newest-cni-832562" cluster
	I1212 20:11:12.705863  306436 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:11:12.709546  306436 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:11:12.710732  306436 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:11:12.710861  306436 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 20:11:12.710872  306436 cache.go:65] Caching tarball of preloaded images
	I1212 20:11:12.710930  306436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:11:12.711254  306436 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:11:12.711713  306436 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:11:12.711874  306436 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/config.json ...
	I1212 20:11:12.740476  306436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:11:12.740499  306436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:11:12.740515  306436 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:11:12.740550  306436 start.go:360] acquireMachinesLock for newest-cni-832562: {Name:mk09681eb0bd95476952ca6616e7bf9ebfe66f0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:11:12.740607  306436 start.go:364] duration metric: took 36.955µs to acquireMachinesLock for "newest-cni-832562"
	I1212 20:11:12.740626  306436 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:11:12.740633  306436 fix.go:54] fixHost starting: 
	I1212 20:11:12.740922  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:12.762201  306436 fix.go:112] recreateIfNeeded on newest-cni-832562: state=Stopped err=<nil>
	W1212 20:11:12.762227  306436 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:11:12.160118  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:12.659749  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:13.162390  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:13.247549  295304 kubeadm.go:1114] duration metric: took 4.684925554s to wait for elevateKubeSystemPrivileges
	I1212 20:11:13.247586  295304 kubeadm.go:403] duration metric: took 17.129842196s to StartCluster
	I1212 20:11:13.247609  295304 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:13.247674  295304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:13.249680  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:13.250021  295304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:11:13.250039  295304 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:13.250112  295304 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:13.250202  295304 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-399565"
	I1212 20:11:13.250219  295304 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-399565"
	I1212 20:11:13.250230  295304 addons.go:70] Setting default-storageclass=true in profile "embed-certs-399565"
	I1212 20:11:13.250240  295304 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:13.250249  295304 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:11:13.250293  295304 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-399565"
	I1212 20:11:13.250642  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.251092  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.251535  295304 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:13.252738  295304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:13.278937  295304 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:13.280251  295304 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:13.280335  295304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:13.280437  295304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-399565
	I1212 20:11:13.282471  295304 addons.go:239] Setting addon default-storageclass=true in "embed-certs-399565"
	I1212 20:11:13.282550  295304 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:11:13.283069  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.310936  295304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/embed-certs-399565/id_rsa Username:docker}
	I1212 20:11:13.314744  295304 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:13.314765  295304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:13.314906  295304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-399565
	I1212 20:11:13.337251  295304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/embed-certs-399565/id_rsa Username:docker}
	I1212 20:11:13.353114  295304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:11:13.417391  295304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:13.420847  295304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:13.448465  295304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:13.533431  295304 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 20:11:13.534253  295304 node_ready.go:35] waiting up to 6m0s for node "embed-certs-399565" to be "Ready" ...
	I1212 20:11:13.742444  295304 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:11:13.743388  295304 addons.go:530] duration metric: took 493.280619ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:11:14.039742  295304 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-399565" context rescaled to 1 replicas
	W1212 20:11:15.537067  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	I1212 20:11:12.763744  306436 out.go:252] * Restarting existing docker container for "newest-cni-832562" ...
	I1212 20:11:12.763824  306436 cli_runner.go:164] Run: docker start newest-cni-832562
	I1212 20:11:13.023832  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:13.046680  306436 kic.go:430] container "newest-cni-832562" state is running.
	I1212 20:11:13.047113  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:13.068770  306436 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/config.json ...
	I1212 20:11:13.069032  306436 machine.go:94] provisionDockerMachine start ...
	I1212 20:11:13.069098  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:13.089330  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:13.089573  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:13.089588  306436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:11:13.090218  306436 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49200->127.0.0.1:33099: read: connection reset by peer
	I1212 20:11:16.222759  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:11:16.222785  306436 ubuntu.go:182] provisioning hostname "newest-cni-832562"
	I1212 20:11:16.222834  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.241438  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.241751  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.241768  306436 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-832562 && echo "newest-cni-832562" | sudo tee /etc/hostname
	I1212 20:11:16.380807  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:11:16.380888  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.398960  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.399163  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.399179  306436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-832562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-832562/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-832562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:11:16.530634  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:11:16.530659  306436 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:11:16.530681  306436 ubuntu.go:190] setting up certificates
	I1212 20:11:16.530691  306436 provision.go:84] configureAuth start
	I1212 20:11:16.530749  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:16.548912  306436 provision.go:143] copyHostCerts
	I1212 20:11:16.548982  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:11:16.548998  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:11:16.549073  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:11:16.549266  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:11:16.549294  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:11:16.549341  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:11:16.549441  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:11:16.549451  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:11:16.549488  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:11:16.549559  306436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.newest-cni-832562 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-832562]
	I1212 20:11:16.636954  306436 provision.go:177] copyRemoteCerts
	I1212 20:11:16.637013  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:11:16.637053  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.655185  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:16.749983  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:11:16.766255  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:11:16.782372  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:11:16.798676  306436 provision.go:87] duration metric: took 267.965188ms to configureAuth
	I1212 20:11:16.798702  306436 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:11:16.798853  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:16.798944  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.816825  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.817017  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.817034  306436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:11:17.128705  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:11:17.128730  306436 machine.go:97] duration metric: took 4.059681977s to provisionDockerMachine
	I1212 20:11:17.128745  306436 start.go:293] postStartSetup for "newest-cni-832562" (driver="docker")
	I1212 20:11:17.128761  306436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:11:17.128838  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:11:17.128884  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.149194  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.250736  306436 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:11:17.254854  306436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:11:17.254886  306436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:11:17.254899  306436 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:11:17.254950  306436 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:11:17.255040  306436 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:11:17.255125  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:11:17.264376  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:11:17.284781  306436 start.go:296] duration metric: took 156.020863ms for postStartSetup
	I1212 20:11:17.284867  306436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:11:17.284913  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.305853  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.402518  306436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:11:17.406823  306436 fix.go:56] duration metric: took 4.666186138s for fixHost
	I1212 20:11:17.406851  306436 start.go:83] releasing machines lock for "newest-cni-832562", held for 4.666234992s
	I1212 20:11:17.406917  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:17.424782  306436 ssh_runner.go:195] Run: cat /version.json
	I1212 20:11:17.424802  306436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:11:17.424837  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.424858  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.442825  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.443862  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.534872  306436 ssh_runner.go:195] Run: systemctl --version
	I1212 20:11:17.600181  306436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:11:17.641146  306436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:11:17.646092  306436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:11:17.646167  306436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:11:17.654310  306436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:11:17.654332  306436 start.go:496] detecting cgroup driver to use...
	I1212 20:11:17.654363  306436 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:11:17.654404  306436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:11:17.669500  306436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:11:17.681081  306436 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:11:17.681134  306436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:11:17.694386  306436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:11:17.705620  306436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:11:17.784550  306436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:11:17.861606  306436 docker.go:234] disabling docker service ...
	I1212 20:11:17.861656  306436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:11:17.875336  306436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:11:17.888438  306436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:11:17.971427  306436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:11:18.073018  306436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:11:18.084838  306436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:11:18.098527  306436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:11:18.098580  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.107046  306436 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:11:18.107111  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.116104  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.124558  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.132638  306436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:11:18.140230  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.149072  306436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.158072  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.168037  306436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:11:18.176007  306436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:11:18.183229  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:18.288050  306436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:11:18.427103  306436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:11:18.427179  306436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:11:18.431178  306436 start.go:564] Will wait 60s for crictl version
	I1212 20:11:18.431236  306436 ssh_runner.go:195] Run: which crictl
	I1212 20:11:18.434958  306436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:11:18.459474  306436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:11:18.459547  306436 ssh_runner.go:195] Run: crio --version
	I1212 20:11:18.486435  306436 ssh_runner.go:195] Run: crio --version
	I1212 20:11:18.514372  306436 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:11:18.515327  306436 cli_runner.go:164] Run: docker network inspect newest-cni-832562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:11:18.531943  306436 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 20:11:18.536350  306436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:11:18.548096  306436 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 20:11:18.991510  301411 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:11:18.991612  301411 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:11:18.991704  301411 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:11:18.991752  301411 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:11:18.991819  301411 kubeadm.go:319] OS: Linux
	I1212 20:11:18.991896  301411 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:11:18.991940  301411 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:11:18.991989  301411 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:11:18.992047  301411 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:11:18.992141  301411 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:11:18.992226  301411 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:11:18.992354  301411 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:11:18.992466  301411 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:11:18.992570  301411 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:11:18.992682  301411 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:11:18.992765  301411 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:11:18.992819  301411 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:11:18.994628  301411 out.go:252]   - Generating certificates and keys ...
	I1212 20:11:18.994711  301411 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:11:18.994809  301411 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:11:18.994900  301411 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:11:18.994976  301411 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:11:18.995071  301411 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:11:18.995158  301411 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:11:18.995244  301411 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:11:18.995445  301411 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:11:18.995531  301411 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:11:18.995672  301411 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:11:18.995783  301411 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:11:18.995852  301411 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:11:18.995893  301411 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:11:18.995963  301411 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:11:18.996022  301411 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:11:18.996090  301411 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:11:18.996165  301411 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:11:18.996286  301411 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:11:18.996370  301411 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:11:18.996501  301411 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:11:18.996557  301411 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:11:18.997850  301411 out.go:252]   - Booting up control plane ...
	I1212 20:11:18.997970  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:11:18.998091  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:11:18.998188  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:11:18.998364  301411 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:11:18.998473  301411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:11:18.998564  301411 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:11:18.998691  301411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:11:18.998761  301411 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:11:18.998930  301411 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:11:18.999095  301411 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:11:18.999181  301411 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001160152s
	I1212 20:11:18.999321  301411 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:11:18.999437  301411 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:11:18.999573  301411 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:11:18.999679  301411 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:11:18.999786  301411 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.861418367s
	I1212 20:11:18.999870  301411 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.421043224s
	I1212 20:11:18.999967  301411 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501427281s
	I1212 20:11:19.000092  301411 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:11:19.000238  301411 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:11:19.000296  301411 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:11:19.000475  301411 kubeadm.go:319] [mark-control-plane] Marking the node auto-789448 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:11:19.000557  301411 kubeadm.go:319] [bootstrap-token] Using token: 37si91.mktn1vtsbp7n8vf2
	I1212 20:11:19.001847  301411 out.go:252]   - Configuring RBAC rules ...
	I1212 20:11:19.001969  301411 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:11:19.002045  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:11:19.002169  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:11:19.002361  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:11:19.002516  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:11:19.002620  301411 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:11:19.002758  301411 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:11:19.002838  301411 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:11:19.002907  301411 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:11:19.002924  301411 kubeadm.go:319] 
	I1212 20:11:19.003018  301411 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:11:19.003027  301411 kubeadm.go:319] 
	I1212 20:11:19.003143  301411 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:11:19.003156  301411 kubeadm.go:319] 
	I1212 20:11:19.003198  301411 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:11:19.003296  301411 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:11:19.003377  301411 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:11:19.003393  301411 kubeadm.go:319] 
	I1212 20:11:19.003453  301411 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:11:19.003463  301411 kubeadm.go:319] 
	I1212 20:11:19.003508  301411 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:11:19.003514  301411 kubeadm.go:319] 
	I1212 20:11:19.003573  301411 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:11:19.003682  301411 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:11:19.003798  301411 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:11:19.003807  301411 kubeadm.go:319] 
	I1212 20:11:19.003932  301411 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:11:19.004037  301411 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:11:19.004060  301411 kubeadm.go:319] 
	I1212 20:11:19.004167  301411 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 37si91.mktn1vtsbp7n8vf2 \
	I1212 20:11:19.004303  301411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:11:19.004334  301411 kubeadm.go:319] 	--control-plane 
	I1212 20:11:19.004343  301411 kubeadm.go:319] 
	I1212 20:11:19.004443  301411 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:11:19.004454  301411 kubeadm.go:319] 
	I1212 20:11:19.004561  301411 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 37si91.mktn1vtsbp7n8vf2 \
	I1212 20:11:19.004687  301411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:11:19.004700  301411 cni.go:84] Creating CNI manager for ""
	I1212 20:11:19.004709  301411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:19.006102  301411 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:11:18.549351  306436 kubeadm.go:884] updating cluster {Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:11:18.549491  306436 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:11:18.549552  306436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:11:18.581474  306436 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:11:18.581492  306436 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:11:18.581529  306436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:11:18.606848  306436 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:11:18.606866  306436 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:11:18.606879  306436 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 20:11:18.606969  306436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-832562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:11:18.607028  306436 ssh_runner.go:195] Run: crio config
	I1212 20:11:18.650568  306436 cni.go:84] Creating CNI manager for ""
	I1212 20:11:18.650585  306436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:18.650597  306436 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 20:11:18.650621  306436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-832562 NodeName:newest-cni-832562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:11:18.650797  306436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-832562"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:11:18.650875  306436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:11:18.659204  306436 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:11:18.659264  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:11:18.666538  306436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:11:18.678253  306436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:11:18.690427  306436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
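Note on the step above: the manifest rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is copied to /var/tmp/minikube/kubeadm.yaml.new and later compared with the existing /var/tmp/minikube/kubeadm.yaml via "diff -u" (see the restartPrimaryControlPlane entries further down) to decide whether the control plane actually needs reconfiguring. A minimal Go sketch of that write-and-compare idea, using hypothetical helper names rather than minikube's real ssh_runner API:

// kubeadm_diff_sketch.go - hypothetical illustration of the
// "write kubeadm.yaml.new, diff against kubeadm.yaml" step seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// renderedKubeadmYAML stands in for the manifest rendered above.
var renderedKubeadmYAML = "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...rest of the rendered manifest...\n"

func needsReconfigure(current, rendered string) (bool, error) {
	// Write the freshly rendered config next to the existing one.
	if err := os.WriteFile(rendered, []byte(renderedKubeadmYAML), 0o644); err != nil {
		return false, err
	}
	// `diff -u` exits 0 when the files match and 1 when they differ.
	cmd := exec.Command("diff", "-u", current, rendered)
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			return true, nil // differences found: reconfiguration needed
		}
		return false, err // diff itself failed (e.g. file missing)
	}
	return false, nil
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, "diff failed:", err)
		return
	}
	fmt.Println("needs reconfigure:", changed)
}

In this run the diff finds no changes, which is why the log later reports "The running cluster does not require reconfiguration".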
	I1212 20:11:18.702522  306436 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:11:18.705872  306436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
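The bash one-liner above keeps the /etc/hosts pinning idempotent: it drops any existing line ending in a tab plus "control-plane.minikube.internal", appends the current node IP, and copies the result back over /etc/hosts. A hypothetical Go sketch of the same idea (simplified, without the sudo/ssh plumbing):

// hosts_pin_sketch.go - hypothetical sketch of pinning
// "192.168.76.2<TAB>control-plane.minikube.internal" in /etc/hosts,
// mirroring the grep/cp one-liner in the log above.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the control-plane host name.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal")
}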
	I1212 20:11:18.715341  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:18.799933  306436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:18.822260  306436 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562 for IP: 192.168.76.2
	I1212 20:11:18.822291  306436 certs.go:195] generating shared ca certs ...
	I1212 20:11:18.822312  306436 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:18.822472  306436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:11:18.822539  306436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:11:18.822556  306436 certs.go:257] generating profile certs ...
	I1212 20:11:18.822665  306436 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/client.key
	I1212 20:11:18.822742  306436 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.key.a4f7d03e
	I1212 20:11:18.822794  306436 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.key
	I1212 20:11:18.822940  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:11:18.822988  306436 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:11:18.823003  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:11:18.823040  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:11:18.823080  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:11:18.823116  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:11:18.823178  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:11:18.823724  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:11:18.841416  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:11:18.861938  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:11:18.880588  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:11:18.904203  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:11:18.923257  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:11:18.940506  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:11:18.956851  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:11:18.973739  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:11:18.991233  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:11:19.009149  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:11:19.027209  306436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:11:19.039983  306436 ssh_runner.go:195] Run: openssl version
	I1212 20:11:19.046698  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.054113  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:11:19.062666  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.066186  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.066233  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.105711  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:11:19.114638  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.123679  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:11:19.131354  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.135466  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.135523  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.173657  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:11:19.182212  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.190700  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:11:19.198624  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.202780  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.202838  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.246502  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
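The "test -L /etc/ssl/certs/<hash>.0" checks above rely on OpenSSL's hashed certificate directory convention: "openssl x509 -hash -noout" prints the subject-name hash of a certificate (3ec20f2e, b5213941 and 51391683 in this run), and a symlink named <hash>.0 in /etc/ssl/certs lets TLS clients that scan that directory discover the CA. A small, hypothetical Go sketch of creating such a link:

// ca_hash_link_sketch.go - hypothetical sketch of the CA-trust step in the log:
// compute the OpenSSL subject hash of a PEM certificate and expose it in
// /etc/ssl/certs as <hash>.0 so clients scanning the hashed directory find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(certPath, certsDir string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}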
	I1212 20:11:19.255168  306436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:11:19.259717  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:11:19.313539  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:11:19.371225  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:11:19.422739  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:11:19.470384  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:11:19.532059  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
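Each "-checkend 86400" invocation above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration of that certificate. The equivalent check in Go, as a hypothetical sketch using crypto/x509:

// cert_checkend_sketch.go - hypothetical Go equivalent of
// `openssl x509 -noout -in <cert> -checkend 86400`: report whether the
// certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}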
	I1212 20:11:19.588061  306436 kubeadm.go:401] StartCluster: {Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:19.588158  306436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:11:19.588214  306436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:11:19.625669  306436 cri.go:89] found id: "302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511"
	I1212 20:11:19.625691  306436 cri.go:89] found id: "f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1"
	I1212 20:11:19.625696  306436 cri.go:89] found id: "41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f"
	I1212 20:11:19.625701  306436 cri.go:89] found id: "cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564"
	I1212 20:11:19.625705  306436 cri.go:89] found id: ""
	I1212 20:11:19.625749  306436 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:11:19.638803  306436 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:19Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:11:19.638873  306436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:11:19.647053  306436 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:11:19.647070  306436 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:11:19.647111  306436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:11:19.654948  306436 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:11:19.655771  306436 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-832562" does not appear in /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:19.656483  306436 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-5703/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-832562" cluster setting kubeconfig missing "newest-cni-832562" context setting]
	I1212 20:11:19.657615  306436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.659393  306436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:11:19.667192  306436 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 20:11:19.667217  306436 kubeadm.go:602] duration metric: took 20.141054ms to restartPrimaryControlPlane
	I1212 20:11:19.667226  306436 kubeadm.go:403] duration metric: took 79.176832ms to StartCluster
	I1212 20:11:19.667240  306436 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.667307  306436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:19.669327  306436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.669545  306436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:19.669627  306436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:19.669735  306436 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-832562"
	I1212 20:11:19.669753  306436 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-832562"
	W1212 20:11:19.669764  306436 addons.go:248] addon storage-provisioner should already be in state true
	I1212 20:11:19.669770  306436 addons.go:70] Setting dashboard=true in profile "newest-cni-832562"
	I1212 20:11:19.669794  306436 addons.go:239] Setting addon dashboard=true in "newest-cni-832562"
	I1212 20:11:19.669803  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	W1212 20:11:19.669804  306436 addons.go:248] addon dashboard should already be in state true
	I1212 20:11:19.669821  306436 addons.go:70] Setting default-storageclass=true in profile "newest-cni-832562"
	I1212 20:11:19.669845  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:19.669855  306436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-832562"
	I1212 20:11:19.670004  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:19.670151  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.670372  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.670393  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.671836  306436 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:19.673143  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:19.696493  306436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:19.696549  306436 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 20:11:19.698073  306436 addons.go:239] Setting addon default-storageclass=true in "newest-cni-832562"
	W1212 20:11:19.698091  306436 addons.go:248] addon default-storageclass should already be in state true
	I1212 20:11:19.698117  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:19.698299  306436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:19.698320  306436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:19.698389  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.698714  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.699611  306436 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 20:11:19.007171  301411 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:11:19.012044  301411 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:11:19.012058  301411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:11:19.025492  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:11:19.238269  301411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:11:19.238406  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.238445  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-789448 minikube.k8s.io/updated_at=2025_12_12T20_11_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=auto-789448 minikube.k8s.io/primary=true
	I1212 20:11:19.248815  301411 ops.go:34] apiserver oom_adj: -16
	I1212 20:11:19.342028  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.842118  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.700607  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 20:11:19.700623  306436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 20:11:19.700681  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.733819  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.737686  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.738213  306436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:19.738230  306436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:19.738825  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.763235  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.814806  306436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:19.827954  306436 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:19.828021  306436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:19.839160  306436 api_server.go:72] duration metric: took 169.583655ms to wait for apiserver process to appear ...
	I1212 20:11:19.839192  306436 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:19.839213  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:19.851668  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:19.852695  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 20:11:19.852713  306436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 20:11:19.866443  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 20:11:19.866463  306436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 20:11:19.872697  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:19.879944  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 20:11:19.879960  306436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 20:11:19.895031  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 20:11:19.895047  306436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 20:11:19.913465  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 20:11:19.913492  306436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 20:11:19.934394  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 20:11:19.934434  306436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 20:11:19.948772  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 20:11:19.948799  306436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 20:11:19.964030  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 20:11:19.964051  306436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 20:11:19.977064  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 20:11:19.977085  306436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 20:11:19.994045  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 20:11:20.863984  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:11:20.864012  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:11:20.864028  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:20.873597  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:11:20.873626  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:11:21.339755  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:21.345549  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:11:21.345582  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:11:21.486618  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.613891926s)
	I1212 20:11:21.486815  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635117346s)
	I1212 20:11:21.486881  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.492798287s)
	I1212 20:11:21.488852  306436 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-832562 addons enable metrics-server
	
	I1212 20:11:21.499486  306436 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1212 20:11:17.537795  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	W1212 20:11:19.538262  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	I1212 20:11:21.500689  306436 addons.go:530] duration metric: took 1.831068212s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 20:11:21.839709  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:21.844672  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:11:21.844695  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:11:22.340268  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:22.344845  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
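The healthz sequence above is the usual restart pattern: the probe first returns 403 while the request is still treated as anonymous, then 500 while post-start hooks such as rbac/bootstrap-roles are still settling, and finally 200 "ok". A hypothetical Go sketch of such a wait loop (the real client presents the cluster CA and client certificates; this sketch skips TLS verification purely for brevity):

// healthz_poll_sketch.go - hypothetical sketch of the readiness loop in the log:
// poll the apiserver's /healthz endpoint until it returns 200 "ok" or a
// deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}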
	I1212 20:11:22.345824  306436 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 20:11:22.345852  306436 api_server.go:131] duration metric: took 2.506651572s to wait for apiserver health ...
	I1212 20:11:22.345863  306436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:22.349555  306436 system_pods.go:59] 8 kube-system pods found
	I1212 20:11:22.349589  306436 system_pods.go:61] "coredns-7d764666f9-4762p" [a53ee562-410c-45be-b679-2660aa1e5684] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 20:11:22.349603  306436 system_pods.go:61] "etcd-newest-cni-832562" [49c28736-14cd-4e9c-a3a6-f0fd7b64c184] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:11:22.349609  306436 system_pods.go:61] "kindnet-zpw2b" [2340f364-5a1b-4ed7-89bc-3c9347238a44] Running
	I1212 20:11:22.349615  306436 system_pods.go:61] "kube-apiserver-newest-cni-832562" [4bafc9d8-689e-4b1d-aa30-d6a7ca78b990] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:11:22.349621  306436 system_pods.go:61] "kube-controller-manager-newest-cni-832562" [39096cb8-3644-4518-9f94-ee0bafe5f02a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:11:22.349625  306436 system_pods.go:61] "kube-proxy-x67v5" [62e57f5e-f9e9-4a12-8e87-0f95e2e0879d] Running
	I1212 20:11:22.349637  306436 system_pods.go:61] "kube-scheduler-newest-cni-832562" [86b42489-2f0a-46e5-9ebc-e551a2a0aa33] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:11:22.349648  306436 system_pods.go:61] "storage-provisioner" [d57bccb6-b89e-405d-ae22-62d444454f02] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 20:11:22.349654  306436 system_pods.go:74] duration metric: took 3.784457ms to wait for pod list to return data ...
	I1212 20:11:22.349664  306436 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:22.352097  306436 default_sa.go:45] found service account: "default"
	I1212 20:11:22.352118  306436 default_sa.go:55] duration metric: took 2.44826ms for default service account to be created ...
	I1212 20:11:22.352131  306436 kubeadm.go:587] duration metric: took 2.68256s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 20:11:22.352152  306436 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:11:22.354718  306436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:11:22.354740  306436 node_conditions.go:123] node cpu capacity is 8
	I1212 20:11:22.354752  306436 node_conditions.go:105] duration metric: took 2.594521ms to run NodePressure ...
	I1212 20:11:22.354766  306436 start.go:242] waiting for startup goroutines ...
	I1212 20:11:22.354775  306436 start.go:247] waiting for cluster config update ...
	I1212 20:11:22.354791  306436 start.go:256] writing updated cluster config ...
	I1212 20:11:22.355051  306436 ssh_runner.go:195] Run: rm -f paused
	I1212 20:11:22.418500  306436 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 20:11:22.420410  306436 out.go:179] * Done! kubectl is now configured to use "newest-cni-832562" cluster and "default" namespace by default
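The "minor skew: 1" note two lines up compares the local kubectl minor version (1.34) against the cluster's (1.35); a skew of one minor version is within kubectl's documented support window, so only an informational message is printed. A hypothetical sketch of that comparison:

// version_skew_sketch.go - hypothetical sketch of the "minor skew" computation
// reported in the log.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minorOf(v string) (int, error) {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) < 2 {
		return 0, fmt.Errorf("unparseable version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	kubectlMinor, _ := minorOf("1.34.3")
	clusterMinor, _ := minorOf("1.35.0-beta.0")
	skew := clusterMinor - kubectlMinor
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints: minor skew: 1
}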
	I1212 20:11:20.342571  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:20.842481  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:21.342086  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:21.842876  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:22.342351  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:22.842084  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:23.343094  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:23.438658  301411 kubeadm.go:1114] duration metric: took 4.200329546s to wait for elevateKubeSystemPrivileges
	I1212 20:11:23.438700  301411 kubeadm.go:403] duration metric: took 15.380616174s to StartCluster
	I1212 20:11:23.438722  301411 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:23.438814  301411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:23.440749  301411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:23.441006  301411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:11:23.441006  301411 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:23.441095  301411 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:23.441174  301411 config.go:182] Loaded profile config "auto-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:23.441213  301411 addons.go:70] Setting storage-provisioner=true in profile "auto-789448"
	I1212 20:11:23.441234  301411 addons.go:239] Setting addon storage-provisioner=true in "auto-789448"
	I1212 20:11:23.441249  301411 addons.go:70] Setting default-storageclass=true in profile "auto-789448"
	I1212 20:11:23.441266  301411 host.go:66] Checking if "auto-789448" exists ...
	I1212 20:11:23.441327  301411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-789448"
	I1212 20:11:23.441697  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:23.441865  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:23.442425  301411 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:23.443597  301411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:23.468338  301411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:23.470077  301411 addons.go:239] Setting addon default-storageclass=true in "auto-789448"
	I1212 20:11:23.470124  301411 host.go:66] Checking if "auto-789448" exists ...
	I1212 20:11:23.470456  301411 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:23.470470  301411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:23.470519  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:23.470618  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:23.502649  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:23.503421  301411 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:23.503517  301411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:23.503598  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:23.527733  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:23.572919  301411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
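The sed pipeline above patches the CoreDNS ConfigMap in place: it inserts a "hosts" block (mapping host.minikube.internal to the gateway IP) ahead of the "forward . /etc/resolv.conf" line, adds a "log" directive ahead of "errors", and replaces the ConfigMap via kubectl. Assuming the stock Corefile layout, the patched stanza ends up looking roughly like:

.:53 {
    log
    errors
    # ...other default plugins...
    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    # ...
}

This is what makes the "host record injected into CoreDNS's ConfigMap" message a few lines below possible.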
	I1212 20:11:23.634066  301411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:23.664484  301411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:23.670838  301411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:23.829995  301411 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1212 20:11:23.831866  301411 node_ready.go:35] waiting up to 15m0s for node "auto-789448" to be "Ready" ...
	I1212 20:11:24.010802  301411 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:11:24.011762  301411 addons.go:530] duration metric: took 570.665845ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:11:24.335022  301411 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-789448" context rescaled to 1 replicas
	
	
	==> CRI-O <==
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.196901435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.200213204Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0acce53a-005d-458b-9c5c-c502ac9e1da0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.200780154Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=df9196bd-29af-4756-bcdb-d9626dff5b95 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.201718035Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.202320606Z" level=info msg="Ran pod sandbox 2ded7466bf5acbdc0fa7469415d3650534bad29720a4944a6eed25dd523606ce with infra container: kube-system/kindnet-zpw2b/POD" id=0acce53a-005d-458b-9c5c-c502ac9e1da0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.202413069Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.203251842Z" level=info msg="Ran pod sandbox 45950668b15aff558dee8dbe3e4a3010379b07f69973797c94f81e655436f6c1 with infra container: kube-system/kube-proxy-x67v5/POD" id=df9196bd-29af-4756-bcdb-d9626dff5b95 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.203392999Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c0bd8e4-56b4-439c-93e9-9c116862ac74 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.204359514Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a603d21d-f1e6-4cce-ad6c-e7e29b8df7ab name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.204366958Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b957bf60-a217-4adc-9e4d-b29255c9f33a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.205581153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a94c4394-b208-49db-9a4f-3e6aa18b606b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.205805833Z" level=info msg="Creating container: kube-system/kindnet-zpw2b/kindnet-cni" id=5c9599bf-a698-47d4-a720-0f312dfa5712 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.205899607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.206701859Z" level=info msg="Creating container: kube-system/kube-proxy-x67v5/kube-proxy" id=0a295639-0b0d-4377-a5c7-a7418a683e20 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.206819586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.210993058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.211578552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.213347209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.213726562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.240940787Z" level=info msg="Created container 1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697: kube-system/kindnet-zpw2b/kindnet-cni" id=5c9599bf-a698-47d4-a720-0f312dfa5712 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.24173612Z" level=info msg="Starting container: 1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697" id=cda72eba-57b1-42e0-bcbe-c1a571d1dd00 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.244173727Z" level=info msg="Started container" PID=1060 containerID=1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697 description=kube-system/kindnet-zpw2b/kindnet-cni id=cda72eba-57b1-42e0-bcbe-c1a571d1dd00 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ded7466bf5acbdc0fa7469415d3650534bad29720a4944a6eed25dd523606ce
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.249301986Z" level=info msg="Created container 20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1: kube-system/kube-proxy-x67v5/kube-proxy" id=0a295639-0b0d-4377-a5c7-a7418a683e20 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.249808487Z" level=info msg="Starting container: 20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1" id=aaa488bb-ebba-4814-ab96-bef64d9b4834 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.252385017Z" level=info msg="Started container" PID=1061 containerID=20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1 description=kube-system/kube-proxy-x67v5/kube-proxy id=aaa488bb-ebba-4814-ab96-bef64d9b4834 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45950668b15aff558dee8dbe3e4a3010379b07f69973797c94f81e655436f6c1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20ef95e4dd71f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   45950668b15af       kube-proxy-x67v5                            kube-system
	1a65b654bdace       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   2ded7466bf5ac       kindnet-zpw2b                               kube-system
	302da1b0b4b49       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   6 seconds ago       Running             kube-apiserver            1                   626c6c89a894d       kube-apiserver-newest-cni-832562            kube-system
	f0a7c03f08d77       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   6 seconds ago       Running             kube-scheduler            1                   d03f6412e288a       kube-scheduler-newest-cni-832562            kube-system
	41418d6b64580       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   6 seconds ago       Running             kube-controller-manager   1                   f115bdf351268       kube-controller-manager-newest-cni-832562   kube-system
	cf33221a5bf25       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   6 seconds ago       Running             etcd                      1                   f56b859dbae4c       etcd-newest-cni-832562                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-832562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-832562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=newest-cni-832562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_11_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-832562
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-832562
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                02e0f34f-a5d1-439b-8544-2451e32971bb
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-832562                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-zpw2b                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19s
	  kube-system                 kube-apiserver-newest-cni-832562             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-832562    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-x67v5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-newest-cni-832562             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21s   node-controller  Node newest-cni-832562 event: Registered Node newest-cni-832562 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-832562 event: Registered Node newest-cni-832562 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564] <==
	{"level":"warn","ts":"2025-12-12T20:11:20.198678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.205810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.212580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.219571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.228797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.235971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.241993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.248434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.254838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.262343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.270982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.277701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.287434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.294209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.301038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.307467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.314550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.328600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.335334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.341530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.347926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.355359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.378499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.392464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.445843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36534","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:11:26 up 53 min,  0 user,  load average: 4.54, 2.50, 1.68
	Linux newest-cni-832562 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697] <==
	I1212 20:11:21.427879       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:11:21.519225       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 20:11:21.519378       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:11:21.519401       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:11:21.519432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:11:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:11:21.631161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:11:21.631217       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:11:21.631231       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:11:21.631473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:11:22.131735       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:11:22.131756       1 metrics.go:72] Registering metrics
	I1212 20:11:22.131820       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511] <==
	I1212 20:11:20.913755       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 20:11:20.913860       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 20:11:20.914812       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:20.914352       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 20:11:20.914863       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:20.914971       1 aggregator.go:187] initial CRD sync complete...
	I1212 20:11:20.915019       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:11:20.915098       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:11:20.915131       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:11:20.915801       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:20.925261       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 20:11:20.926309       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1212 20:11:20.952521       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:11:20.973389       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:11:21.046599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:11:21.256199       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:11:21.288216       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:11:21.308886       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:11:21.316650       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:11:21.362185       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.207.127"}
	I1212 20:11:21.378675       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.164.212"}
	I1212 20:11:21.816570       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 20:11:24.474028       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:11:24.527181       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:11:24.676674       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f] <==
	I1212 20:11:24.078123       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079451       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079533       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079521       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078135       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079581       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079625       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078131       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079726       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078148       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078137       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079599       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078141       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080260       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080340       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080388       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080429       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080411       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080327       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.085321       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.087221       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:24.179602       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.179628       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 20:11:24.179634       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 20:11:24.187464       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1] <==
	I1212 20:11:21.294489       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:11:21.374500       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:21.475444       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:21.475477       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 20:11:21.475614       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:11:21.499622       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:11:21.499683       1 server_linux.go:136] "Using iptables Proxier"
	I1212 20:11:21.505402       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:11:21.505795       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 20:11:21.505816       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:21.507328       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:11:21.507357       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:11:21.507388       1 config.go:200] "Starting service config controller"
	I1212 20:11:21.507394       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:11:21.507432       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:11:21.507444       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:11:21.507606       1 config.go:309] "Starting node config controller"
	I1212 20:11:21.507638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:11:21.507652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:11:21.607527       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:11:21.607547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:11:21.607556       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1] <==
	I1212 20:11:19.779837       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:11:20.867961       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:11:20.868105       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:11:20.868121       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:11:20.868161       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:11:20.902575       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 20:11:20.902611       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:20.905736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:11:20.905772       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:20.905864       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:11:20.907025       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:11:21.006892       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.930796     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.930944     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.955089     677 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.955383     677 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.955432     677 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.958141     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-832562\" already exists" pod="kube-system/kube-scheduler-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.959017     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.961405     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-832562\" already exists" pod="kube-system/kube-apiserver-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.962172     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.961426     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-832562\" already exists" pod="kube-system/etcd-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.962771     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.963072     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-832562" containerName="etcd"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.991623     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044141     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-lib-modules\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044193     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-lib-modules\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044232     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-xtables-lock\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044256     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-cni-cfg\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044324     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-xtables-lock\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: E1212 20:11:21.940508     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-832562" containerName="etcd"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: E1212 20:11:21.940620     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: E1212 20:11:21.940979     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:22 newest-cni-832562 kubelet[677]: E1212 20:11:22.942550     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:23 newest-cni-832562 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:11:23 newest-cni-832562 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:11:23 newest-cni-832562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832562 -n newest-cni-832562
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832562 -n newest-cni-832562: exit status 2 (318.865281ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-832562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2: exit status 1 (57.993406ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4762p" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-tp6gm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-l9nc2" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-832562
helpers_test.go:244: (dbg) docker inspect newest-cni-832562:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd",
	        "Created": "2025-12-12T20:10:44.178344468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306664,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:11:12.791871124Z",
	            "FinishedAt": "2025-12-12T20:11:11.916111093Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/hosts",
	        "LogPath": "/var/lib/docker/containers/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd/2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd-json.log",
	        "Name": "/newest-cni-832562",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-832562:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-832562",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b8b85447870a65328af6bff8a5fc0386a7f78d18530ae3dd075c8b98c68fdcd",
	                "LowerDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31f493d46db95581b1e542e90a5e9ebb6d2f9f3cb581088f2c1a7fe49a4c1d63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-832562",
	                "Source": "/var/lib/docker/volumes/newest-cni-832562/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-832562",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-832562",
	                "name.minikube.sigs.k8s.io": "newest-cni-832562",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a73404c8253f2d70c853528bde9a625fc51640e010c48cad84fefe7a3d59c03e",
	            "SandboxKey": "/var/run/docker/netns/a73404c8253f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-832562": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b0e30eb6e7a6239611b037a06cb38c24c42431a49eddf41a41622bd55f96edd",
	                    "EndpointID": "fad7f7148ce91e2c771942a60c9744047310038477cd15373b6d3d2214e2006f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "7a:3e:77:8d:91:d5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-832562",
	                        "2b8b85447870"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562: exit status 2 (317.271235ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-832562 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-180826                                                                                                                                                                                                                            │ stopped-upgrade-180826       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                         │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p disable-driver-mounts-044739                                                                                                                                                                                                                      │ disable-driver-mounts-044739 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ image   │ no-preload-753103 image list --format=json                                                                                                                                                                                                           │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p no-preload-753103 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p auto-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ stop    │ -p newest-cni-832562 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-832562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ image   │ newest-cni-832562 image list --format=json                                                                                                                                                                                                           │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ pause   │ -p newest-cni-832562 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433034 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:11:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:11:12.532737  306436 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:11:12.532986  306436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:12.532995  306436 out.go:374] Setting ErrFile to fd 2...
	I1212 20:11:12.532999  306436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:12.533167  306436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:11:12.533560  306436 out.go:368] Setting JSON to false
	I1212 20:11:12.534675  306436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3219,"bootTime":1765567053,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:11:12.534737  306436 start.go:143] virtualization: kvm guest
	I1212 20:11:12.536616  306436 out.go:179] * [newest-cni-832562] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:11:12.537669  306436 notify.go:221] Checking for updates...
	I1212 20:11:12.537685  306436 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:11:12.538838  306436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:11:12.540254  306436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:12.541433  306436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:11:12.542571  306436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:11:12.543568  306436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:11:12.544902  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:12.545459  306436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:11:12.570695  306436 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:11:12.570780  306436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:12.629539  306436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:11:12.618700808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
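
The info dump above is the output of `docker system info --format "{{json .}}"`. A minimal illustrative Go sketch (not minikube's code) that runs the same command and decodes a few of the logged fields:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // A handful of fields from Docker's info output, as seen in the log above.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        CgroupDriver    string `json:"CgroupDriver"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s on %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.CgroupDriver, info.NCPU, info.MemTotal)
    }
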
	I1212 20:11:12.629686  306436 docker.go:319] overlay module found
	I1212 20:11:12.631198  306436 out.go:179] * Using the docker driver based on existing profile
	I1212 20:11:11.426717  289770 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433034" is "Ready"
	I1212 20:11:11.426743  289770 pod_ready.go:86] duration metric: took 384.436254ms for pod "kube-controller-manager-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:11.625409  289770 pod_ready.go:83] waiting for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.026631  289770 pod_ready.go:94] pod "kube-proxy-tmrrg" is "Ready"
	I1212 20:11:12.026656  289770 pod_ready.go:86] duration metric: took 401.222833ms for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.227624  289770 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.626733  289770 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433034" is "Ready"
	I1212 20:11:12.626762  289770 pod_ready.go:86] duration metric: took 399.116059ms for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:12.626778  289770 pod_ready.go:40] duration metric: took 1.604405948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:11:12.686012  289770 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:11:12.687473  289770 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433034" cluster and "default" namespace by default
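
The skew check above compares kubectl 1.34.3 against cluster 1.34.2 and reports a minor skew of 0. A small illustrative Go sketch of one way to compute that minor-version skew (helper names are invented for the example):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings, e.g. "1.34.3" vs "1.34.2" -> 0.
    func minorSkew(a, b string) (int, error) {
        ma, err := minor(a)
        if err != nil {
            return 0, err
        }
        mb, err := minor(b)
        if err != nil {
            return 0, err
        }
        if ma > mb {
            return ma - mb, nil
        }
        return mb - ma, nil
    }

    func minor(v string) (int, error) {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("unexpected version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    func main() {
        skew, err := minorSkew("1.34.3", "1.34.2")
        if err != nil {
            panic(err)
        }
        fmt.Println("minor skew:", skew) // 0
    }
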
	I1212 20:11:12.632248  306436 start.go:309] selected driver: docker
	I1212 20:11:12.632261  306436 start.go:927] validating driver "docker" against &{Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:12.632407  306436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:11:12.633116  306436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:12.702610  306436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-12 20:11:12.690486315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:12.702958  306436 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 20:11:12.702986  306436 cni.go:84] Creating CNI manager for ""
	I1212 20:11:12.703053  306436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:12.703091  306436 start.go:353] cluster config:
	{Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:12.704612  306436 out.go:179] * Starting "newest-cni-832562" primary control-plane node in "newest-cni-832562" cluster
	I1212 20:11:12.705863  306436 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:11:12.709546  306436 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:11:12.710732  306436 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:11:12.710861  306436 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 20:11:12.710872  306436 cache.go:65] Caching tarball of preloaded images
	I1212 20:11:12.710930  306436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:11:12.711254  306436 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:11:12.711713  306436 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:11:12.711874  306436 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/config.json ...
	I1212 20:11:12.740476  306436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:11:12.740499  306436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:11:12.740515  306436 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:11:12.740550  306436 start.go:360] acquireMachinesLock for newest-cni-832562: {Name:mk09681eb0bd95476952ca6616e7bf9ebfe66f0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:11:12.740607  306436 start.go:364] duration metric: took 36.955µs to acquireMachinesLock for "newest-cni-832562"
	I1212 20:11:12.740626  306436 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:11:12.740633  306436 fix.go:54] fixHost starting: 
	I1212 20:11:12.740922  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:12.762201  306436 fix.go:112] recreateIfNeeded on newest-cni-832562: state=Stopped err=<nil>
	W1212 20:11:12.762227  306436 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:11:12.160118  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:12.659749  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:13.162390  295304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:13.247549  295304 kubeadm.go:1114] duration metric: took 4.684925554s to wait for elevateKubeSystemPrivileges
	I1212 20:11:13.247586  295304 kubeadm.go:403] duration metric: took 17.129842196s to StartCluster
	I1212 20:11:13.247609  295304 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:13.247674  295304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:13.249680  295304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:13.250021  295304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:11:13.250039  295304 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:13.250112  295304 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:13.250202  295304 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-399565"
	I1212 20:11:13.250219  295304 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-399565"
	I1212 20:11:13.250230  295304 addons.go:70] Setting default-storageclass=true in profile "embed-certs-399565"
	I1212 20:11:13.250240  295304 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:13.250249  295304 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:11:13.250293  295304 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-399565"
	I1212 20:11:13.250642  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.251092  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.251535  295304 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:13.252738  295304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:13.278937  295304 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:13.280251  295304 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:13.280335  295304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:13.280437  295304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-399565
	I1212 20:11:13.282471  295304 addons.go:239] Setting addon default-storageclass=true in "embed-certs-399565"
	I1212 20:11:13.282550  295304 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:11:13.283069  295304 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:11:13.310936  295304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/embed-certs-399565/id_rsa Username:docker}
	I1212 20:11:13.314744  295304 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:13.314765  295304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:13.314906  295304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-399565
	I1212 20:11:13.337251  295304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/embed-certs-399565/id_rsa Username:docker}
	I1212 20:11:13.353114  295304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:11:13.417391  295304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:13.420847  295304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:13.448465  295304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:13.533431  295304 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1212 20:11:13.534253  295304 node_ready.go:35] waiting up to 6m0s for node "embed-certs-399565" to be "Ready" ...
	I1212 20:11:13.742444  295304 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:11:13.743388  295304 addons.go:530] duration metric: took 493.280619ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:11:14.039742  295304 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-399565" context rescaled to 1 replicas
	W1212 20:11:15.537067  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
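
The node_ready wait above retries until the node reports the Ready condition as true. A rough, illustrative Go sketch of such a poll using kubectl and a JSONPath filter (minikube itself talks to the Kubernetes API directly rather than shelling out; the node name is the one from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitNodeReady polls kubectl until the node's Ready condition is "True"
    // or the timeout expires.
    func waitNodeReady(node string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "node", node, "-o",
                `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %s not Ready after %s", node, timeout)
    }

    func main() {
        if err := waitNodeReady("embed-certs-399565", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
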
	I1212 20:11:12.763744  306436 out.go:252] * Restarting existing docker container for "newest-cni-832562" ...
	I1212 20:11:12.763824  306436 cli_runner.go:164] Run: docker start newest-cni-832562
	I1212 20:11:13.023832  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:13.046680  306436 kic.go:430] container "newest-cni-832562" state is running.
	I1212 20:11:13.047113  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:13.068770  306436 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/config.json ...
	I1212 20:11:13.069032  306436 machine.go:94] provisionDockerMachine start ...
	I1212 20:11:13.069098  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:13.089330  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:13.089573  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:13.089588  306436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:11:13.090218  306436 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49200->127.0.0.1:33099: read: connection reset by peer
	I1212 20:11:16.222759  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:11:16.222785  306436 ubuntu.go:182] provisioning hostname "newest-cni-832562"
	I1212 20:11:16.222834  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.241438  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.241751  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.241768  306436 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-832562 && echo "newest-cni-832562" | sudo tee /etc/hostname
	I1212 20:11:16.380807  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-832562
	
	I1212 20:11:16.380888  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.398960  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.399163  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.399179  306436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-832562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-832562/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-832562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:11:16.530634  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
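
The provisioning steps above run hostname and /etc/hosts commands over SSH against the forwarded port 33099. A self-contained Go sketch of that pattern using golang.org/x/crypto/ssh (port, user, and key path are the values from this log; this is an illustration, not libmachine's implementation):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private key and forwarded port as reported in the log above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33099", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("hostname: %s", out)
    }
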
	I1212 20:11:16.530659  306436 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:11:16.530681  306436 ubuntu.go:190] setting up certificates
	I1212 20:11:16.530691  306436 provision.go:84] configureAuth start
	I1212 20:11:16.530749  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:16.548912  306436 provision.go:143] copyHostCerts
	I1212 20:11:16.548982  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:11:16.548998  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:11:16.549073  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:11:16.549266  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:11:16.549294  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:11:16.549341  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:11:16.549441  306436 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:11:16.549451  306436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:11:16.549488  306436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:11:16.549559  306436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.newest-cni-832562 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-832562]
	I1212 20:11:16.636954  306436 provision.go:177] copyRemoteCerts
	I1212 20:11:16.637013  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:11:16.637053  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.655185  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:16.749983  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:11:16.766255  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:11:16.782372  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:11:16.798676  306436 provision.go:87] duration metric: took 267.965188ms to configureAuth
	I1212 20:11:16.798702  306436 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:11:16.798853  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:16.798944  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:16.816825  306436 main.go:143] libmachine: Using SSH client type: native
	I1212 20:11:16.817017  306436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1212 20:11:16.817034  306436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:11:17.128705  306436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:11:17.128730  306436 machine.go:97] duration metric: took 4.059681977s to provisionDockerMachine
	I1212 20:11:17.128745  306436 start.go:293] postStartSetup for "newest-cni-832562" (driver="docker")
	I1212 20:11:17.128761  306436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:11:17.128838  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:11:17.128884  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.149194  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.250736  306436 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:11:17.254854  306436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:11:17.254886  306436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
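
The "Remote host" line above comes from reading /etc/os-release on the node. An illustrative Go sketch that parses that file into key/value pairs (PRETTY_NAME is the field behind the printed name):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // parseOSRelease reads an os-release style file into a map of KEY=value pairs.
    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        kv := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            kv[k] = strings.Trim(v, `"`)
        }
        return kv, sc.Err()
    }

    func main() {
        kv, err := parseOSRelease("/etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Println(kv["PRETTY_NAME"]) // e.g. Debian GNU/Linux 12 (bookworm)
    }
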
	I1212 20:11:17.254899  306436 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:11:17.254950  306436 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:11:17.255040  306436 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:11:17.255125  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:11:17.264376  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:11:17.284781  306436 start.go:296] duration metric: took 156.020863ms for postStartSetup
	I1212 20:11:17.284867  306436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:11:17.284913  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.305853  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.402518  306436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:11:17.406823  306436 fix.go:56] duration metric: took 4.666186138s for fixHost
	I1212 20:11:17.406851  306436 start.go:83] releasing machines lock for "newest-cni-832562", held for 4.666234992s
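
The df commands a few lines above extract usage and free space for /var with awk. An illustrative, Linux-only Go sketch that derives roughly the same numbers directly from Statfs (the percentage may differ slightly from df's rounding):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            panic(err)
        }
        // Used and available bytes, following df's used/(used+avail) convention.
        used := (st.Blocks - st.Bfree) * uint64(st.Bsize)
        avail := st.Bavail * uint64(st.Bsize)
        usedPct := 100 * float64(used) / float64(used+avail)
        fmt.Printf("/var: %.1f%% used, %d GiB available\n", usedPct, avail>>30)
    }
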
	I1212 20:11:17.406917  306436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-832562
	I1212 20:11:17.424782  306436 ssh_runner.go:195] Run: cat /version.json
	I1212 20:11:17.424802  306436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:11:17.424837  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.424858  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:17.442825  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.443862  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:17.534872  306436 ssh_runner.go:195] Run: systemctl --version
	I1212 20:11:17.600181  306436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:11:17.641146  306436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:11:17.646092  306436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:11:17.646167  306436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:11:17.654310  306436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:11:17.654332  306436 start.go:496] detecting cgroup driver to use...
	I1212 20:11:17.654363  306436 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:11:17.654404  306436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:11:17.669500  306436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:11:17.681081  306436 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:11:17.681134  306436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:11:17.694386  306436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:11:17.705620  306436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:11:17.784550  306436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:11:17.861606  306436 docker.go:234] disabling docker service ...
	I1212 20:11:17.861656  306436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:11:17.875336  306436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:11:17.888438  306436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:11:17.971427  306436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:11:18.073018  306436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:11:18.084838  306436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:11:18.098527  306436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:11:18.098580  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.107046  306436 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:11:18.107111  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.116104  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.124558  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.132638  306436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:11:18.140230  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.149072  306436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.158072  306436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:11:18.168037  306436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:11:18.176007  306436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:11:18.183229  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:18.288050  306436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:11:18.427103  306436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:11:18.427179  306436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:11:18.431178  306436 start.go:564] Will wait 60s for crictl version
	I1212 20:11:18.431236  306436 ssh_runner.go:195] Run: which crictl
	I1212 20:11:18.434958  306436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:11:18.459474  306436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:11:18.459547  306436 ssh_runner.go:195] Run: crio --version
	I1212 20:11:18.486435  306436 ssh_runner.go:195] Run: crio --version
	I1212 20:11:18.514372  306436 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:11:18.515327  306436 cli_runner.go:164] Run: docker network inspect newest-cni-832562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:11:18.531943  306436 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 20:11:18.536350  306436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
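
The command above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending a fresh record. An illustrative Go sketch of the same transformation (the function name is invented for the example; it only prints the result rather than writing the file):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // injectHostRecord drops any existing "<ip>\t<name>" entry for name and
    // appends a fresh record, mirroring the grep/echo pipeline in the log above.
    func injectHostRecord(hosts, ip, name string) string {
        var out []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name
            }
            out = append(out, line)
        }
        out = append(out, ip+"\t"+name)
        return strings.Join(out, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Print(injectHostRecord(string(data), "192.168.76.1", "host.minikube.internal"))
    }
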
	I1212 20:11:18.548096  306436 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 20:11:18.991510  301411 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:11:18.991612  301411 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:11:18.991704  301411 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:11:18.991752  301411 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:11:18.991819  301411 kubeadm.go:319] OS: Linux
	I1212 20:11:18.991896  301411 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:11:18.991940  301411 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:11:18.991989  301411 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:11:18.992047  301411 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:11:18.992141  301411 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:11:18.992226  301411 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:11:18.992354  301411 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:11:18.992466  301411 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:11:18.992570  301411 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:11:18.992682  301411 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:11:18.992765  301411 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:11:18.992819  301411 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:11:18.994628  301411 out.go:252]   - Generating certificates and keys ...
	I1212 20:11:18.994711  301411 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:11:18.994809  301411 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:11:18.994900  301411 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:11:18.994976  301411 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:11:18.995071  301411 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:11:18.995158  301411 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:11:18.995244  301411 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:11:18.995445  301411 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:11:18.995531  301411 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:11:18.995672  301411 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:11:18.995783  301411 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:11:18.995852  301411 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:11:18.995893  301411 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:11:18.995963  301411 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:11:18.996022  301411 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:11:18.996090  301411 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:11:18.996165  301411 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:11:18.996286  301411 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:11:18.996370  301411 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:11:18.996501  301411 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:11:18.996557  301411 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:11:18.997850  301411 out.go:252]   - Booting up control plane ...
	I1212 20:11:18.997970  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:11:18.998091  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:11:18.998188  301411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:11:18.998364  301411 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:11:18.998473  301411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:11:18.998564  301411 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:11:18.998691  301411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:11:18.998761  301411 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:11:18.998930  301411 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:11:18.999095  301411 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:11:18.999181  301411 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001160152s
	I1212 20:11:18.999321  301411 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:11:18.999437  301411 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:11:18.999573  301411 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:11:18.999679  301411 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:11:18.999786  301411 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.861418367s
	I1212 20:11:18.999870  301411 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.421043224s
	I1212 20:11:18.999967  301411 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501427281s
	I1212 20:11:19.000092  301411 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:11:19.000238  301411 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:11:19.000296  301411 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:11:19.000475  301411 kubeadm.go:319] [mark-control-plane] Marking the node auto-789448 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:11:19.000557  301411 kubeadm.go:319] [bootstrap-token] Using token: 37si91.mktn1vtsbp7n8vf2
	I1212 20:11:19.001847  301411 out.go:252]   - Configuring RBAC rules ...
	I1212 20:11:19.001969  301411 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:11:19.002045  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:11:19.002169  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:11:19.002361  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:11:19.002516  301411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:11:19.002620  301411 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:11:19.002758  301411 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:11:19.002838  301411 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:11:19.002907  301411 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:11:19.002924  301411 kubeadm.go:319] 
	I1212 20:11:19.003018  301411 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:11:19.003027  301411 kubeadm.go:319] 
	I1212 20:11:19.003143  301411 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:11:19.003156  301411 kubeadm.go:319] 
	I1212 20:11:19.003198  301411 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:11:19.003296  301411 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:11:19.003377  301411 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:11:19.003393  301411 kubeadm.go:319] 
	I1212 20:11:19.003453  301411 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:11:19.003463  301411 kubeadm.go:319] 
	I1212 20:11:19.003508  301411 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:11:19.003514  301411 kubeadm.go:319] 
	I1212 20:11:19.003573  301411 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:11:19.003682  301411 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:11:19.003798  301411 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:11:19.003807  301411 kubeadm.go:319] 
	I1212 20:11:19.003932  301411 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:11:19.004037  301411 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:11:19.004060  301411 kubeadm.go:319] 
	I1212 20:11:19.004167  301411 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 37si91.mktn1vtsbp7n8vf2 \
	I1212 20:11:19.004303  301411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:11:19.004334  301411 kubeadm.go:319] 	--control-plane 
	I1212 20:11:19.004343  301411 kubeadm.go:319] 
	I1212 20:11:19.004443  301411 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:11:19.004454  301411 kubeadm.go:319] 
	I1212 20:11:19.004561  301411 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 37si91.mktn1vtsbp7n8vf2 \
	I1212 20:11:19.004687  301411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
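
The --discovery-token-ca-cert-hash value printed by kubeadm above is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. An illustrative Go sketch that recomputes it from a CA file (the path is assumed from the certificateDir shown earlier in this log; adjust as needed):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // CA certificate under the certificateDir reported by kubeadm above.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the raw Subject Public Key Info, which is what the
        // discovery-token-ca-cert-hash pin is computed over.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
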
	I1212 20:11:19.004700  301411 cni.go:84] Creating CNI manager for ""
	I1212 20:11:19.004709  301411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:19.006102  301411 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:11:18.549351  306436 kubeadm.go:884] updating cluster {Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:11:18.549491  306436 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:11:18.549552  306436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:11:18.581474  306436 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:11:18.581492  306436 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:11:18.581529  306436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:11:18.606848  306436 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:11:18.606866  306436 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:11:18.606879  306436 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 20:11:18.606969  306436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-832562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:11:18.607028  306436 ssh_runner.go:195] Run: crio config
	I1212 20:11:18.650568  306436 cni.go:84] Creating CNI manager for ""
	I1212 20:11:18.650585  306436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:11:18.650597  306436 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 20:11:18.650621  306436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-832562 NodeName:newest-cni-832562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:11:18.650797  306436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-832562"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:11:18.650875  306436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:11:18.659204  306436 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:11:18.659264  306436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:11:18.666538  306436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:11:18.678253  306436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:11:18.690427  306436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
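The kubeadm.yaml.new just copied is the single multi-document YAML printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a small illustration of how such a file can be inspected, here is a Go sketch that walks the documents with gopkg.in/yaml.v3 (an assumed helper dependency, not something this run uses) and prints each document's kind.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for the sketch only
)

func main() {
	// Hypothetical local copy of the generated config shown in the log above.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Each document declares its own kind and apiVersion, e.g.
		// kind=InitConfiguration apiVersion=kubeadm.k8s.io/v1beta4
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}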
	I1212 20:11:18.702522  306436 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:11:18.705872  306436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
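The bash one-liner above rewrites /etc/hosts by filtering out any existing control-plane.minikube.internal line and appending a fresh entry for the node IP, then copying the result back with sudo. A minimal Go sketch of the same idea, written against a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry mirrors the one-liner above: drop any line ending in
// "<tab><host>" and append "<ip><tab><host>". The path is a parameter so the
// sketch can be exercised on a scratch file instead of /etc/hosts.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("hosts.test", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}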
	I1212 20:11:18.715341  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:18.799933  306436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:18.822260  306436 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562 for IP: 192.168.76.2
	I1212 20:11:18.822291  306436 certs.go:195] generating shared ca certs ...
	I1212 20:11:18.822312  306436 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:18.822472  306436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:11:18.822539  306436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:11:18.822556  306436 certs.go:257] generating profile certs ...
	I1212 20:11:18.822665  306436 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/client.key
	I1212 20:11:18.822742  306436 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.key.a4f7d03e
	I1212 20:11:18.822794  306436 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.key
	I1212 20:11:18.822940  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:11:18.822988  306436 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:11:18.823003  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:11:18.823040  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:11:18.823080  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:11:18.823116  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:11:18.823178  306436 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:11:18.823724  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:11:18.841416  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:11:18.861938  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:11:18.880588  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:11:18.904203  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:11:18.923257  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:11:18.940506  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:11:18.956851  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/newest-cni-832562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:11:18.973739  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:11:18.991233  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:11:19.009149  306436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:11:19.027209  306436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:11:19.039983  306436 ssh_runner.go:195] Run: openssl version
	I1212 20:11:19.046698  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.054113  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:11:19.062666  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.066186  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.066233  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:11:19.105711  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:11:19.114638  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.123679  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:11:19.131354  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.135466  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.135523  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:11:19.173657  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:11:19.182212  306436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.190700  306436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:11:19.198624  306436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.202780  306436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.202838  306436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:11:19.246502  306436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
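The ln/openssl sequence above is how the run installs its CA certificates: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under the short subject hash that `openssl x509 -hash -noout` prints (b5213941.0 for the minikube CA here, 3ec20f2e.0 and 51391683.0 for the others). A small Go sketch of that step, using illustrative local paths instead of the real system directories:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the ln -fs step from the log: ask openssl for
// the certificate's subject hash, then link <hash>.0 in certsDir to the cert.
// The paths in main are placeholders; the run above works over SSH with sudo.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("minikubeCA.pem", "certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}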
	I1212 20:11:19.255168  306436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:11:19.259717  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:11:19.313539  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:11:19.371225  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:11:19.422739  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:11:19.470384  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:11:19.532059  306436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
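Each `openssl x509 -checkend 86400` call above verifies that a control-plane certificate will not expire within the next 24 hours (86400 seconds). A minimal sketch of the equivalent check with Go's standard crypto/x509 package, using a placeholder file name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the run checks the certs under /var/lib/minikube/certs.
	expiring, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}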
	I1212 20:11:19.588061  306436 kubeadm.go:401] StartCluster: {Name:newest-cni-832562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-832562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:19.588158  306436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:11:19.588214  306436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:11:19.625669  306436 cri.go:89] found id: "302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511"
	I1212 20:11:19.625691  306436 cri.go:89] found id: "f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1"
	I1212 20:11:19.625696  306436 cri.go:89] found id: "41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f"
	I1212 20:11:19.625701  306436 cri.go:89] found id: "cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564"
	I1212 20:11:19.625705  306436 cri.go:89] found id: ""
	I1212 20:11:19.625749  306436 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:11:19.638803  306436 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:19Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:11:19.638873  306436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:11:19.647053  306436 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:11:19.647070  306436 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:11:19.647111  306436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:11:19.654948  306436 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:11:19.655771  306436 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-832562" does not appear in /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:19.656483  306436 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-5703/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-832562" cluster setting kubeconfig missing "newest-cni-832562" context setting]
	I1212 20:11:19.657615  306436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.659393  306436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:11:19.667192  306436 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 20:11:19.667217  306436 kubeadm.go:602] duration metric: took 20.141054ms to restartPrimaryControlPlane
	I1212 20:11:19.667226  306436 kubeadm.go:403] duration metric: took 79.176832ms to StartCluster
	I1212 20:11:19.667240  306436 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.667307  306436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:19.669327  306436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:19.669545  306436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:19.669627  306436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:19.669735  306436 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-832562"
	I1212 20:11:19.669753  306436 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-832562"
	W1212 20:11:19.669764  306436 addons.go:248] addon storage-provisioner should already be in state true
	I1212 20:11:19.669770  306436 addons.go:70] Setting dashboard=true in profile "newest-cni-832562"
	I1212 20:11:19.669794  306436 addons.go:239] Setting addon dashboard=true in "newest-cni-832562"
	I1212 20:11:19.669803  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	W1212 20:11:19.669804  306436 addons.go:248] addon dashboard should already be in state true
	I1212 20:11:19.669821  306436 addons.go:70] Setting default-storageclass=true in profile "newest-cni-832562"
	I1212 20:11:19.669845  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:19.669855  306436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-832562"
	I1212 20:11:19.670004  306436 config.go:182] Loaded profile config "newest-cni-832562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:11:19.670151  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.670372  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.670393  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.671836  306436 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:19.673143  306436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:19.696493  306436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:19.696549  306436 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 20:11:19.698073  306436 addons.go:239] Setting addon default-storageclass=true in "newest-cni-832562"
	W1212 20:11:19.698091  306436 addons.go:248] addon default-storageclass should already be in state true
	I1212 20:11:19.698117  306436 host.go:66] Checking if "newest-cni-832562" exists ...
	I1212 20:11:19.698299  306436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:19.698320  306436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:19.698389  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.698714  306436 cli_runner.go:164] Run: docker container inspect newest-cni-832562 --format={{.State.Status}}
	I1212 20:11:19.699611  306436 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 20:11:19.007171  301411 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:11:19.012044  301411 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:11:19.012058  301411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:11:19.025492  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:11:19.238269  301411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:11:19.238406  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.238445  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-789448 minikube.k8s.io/updated_at=2025_12_12T20_11_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=auto-789448 minikube.k8s.io/primary=true
	I1212 20:11:19.248815  301411 ops.go:34] apiserver oom_adj: -16
	I1212 20:11:19.342028  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.842118  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:19.700607  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 20:11:19.700623  306436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 20:11:19.700681  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.733819  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.737686  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.738213  306436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:19.738230  306436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:19.738825  306436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-832562
	I1212 20:11:19.763235  306436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/newest-cni-832562/id_rsa Username:docker}
	I1212 20:11:19.814806  306436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:19.827954  306436 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:19.828021  306436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:19.839160  306436 api_server.go:72] duration metric: took 169.583655ms to wait for apiserver process to appear ...
	I1212 20:11:19.839192  306436 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:19.839213  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:19.851668  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:19.852695  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 20:11:19.852713  306436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 20:11:19.866443  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 20:11:19.866463  306436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 20:11:19.872697  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:19.879944  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 20:11:19.879960  306436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 20:11:19.895031  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 20:11:19.895047  306436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 20:11:19.913465  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 20:11:19.913492  306436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 20:11:19.934394  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 20:11:19.934434  306436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 20:11:19.948772  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 20:11:19.948799  306436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 20:11:19.964030  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 20:11:19.964051  306436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 20:11:19.977064  306436 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 20:11:19.977085  306436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 20:11:19.994045  306436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 20:11:20.863984  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:11:20.864012  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:11:20.864028  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:20.873597  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:11:20.873626  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:11:21.339755  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:21.345549  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:11:21.345582  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:11:21.486618  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.613891926s)
	I1212 20:11:21.486815  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635117346s)
	I1212 20:11:21.486881  306436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.492798287s)
	I1212 20:11:21.488852  306436 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-832562 addons enable metrics-server
	
	I1212 20:11:21.499486  306436 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1212 20:11:17.537795  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	W1212 20:11:19.538262  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	I1212 20:11:21.500689  306436 addons.go:530] duration metric: took 1.831068212s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 20:11:21.839709  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:21.844672  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:11:21.844695  306436 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:11:22.340268  306436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:11:22.344845  306436 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 20:11:22.345824  306436 api_server.go:141] control plane version: v1.35.0-beta.0
	I1212 20:11:22.345852  306436 api_server.go:131] duration metric: took 2.506651572s to wait for apiserver health ...
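The api_server.go wait above keeps probing https://192.168.76.2:8443/healthz, tolerating the 403 (anonymous user) and 500 (rbac/bootstrap-roles poststarthook still failing) responses until the endpoint finally answers 200. A minimal Go sketch of such a poll loop follows; the URL and the InsecureSkipVerify shortcut are illustrative only, since minikube authenticates against the cluster CA rather than skipping verification.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200 OK,
// printing intermediate failures much like the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // in the run above this happens once the rbac poststarthook passes
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}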
	I1212 20:11:22.345863  306436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:22.349555  306436 system_pods.go:59] 8 kube-system pods found
	I1212 20:11:22.349589  306436 system_pods.go:61] "coredns-7d764666f9-4762p" [a53ee562-410c-45be-b679-2660aa1e5684] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 20:11:22.349603  306436 system_pods.go:61] "etcd-newest-cni-832562" [49c28736-14cd-4e9c-a3a6-f0fd7b64c184] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:11:22.349609  306436 system_pods.go:61] "kindnet-zpw2b" [2340f364-5a1b-4ed7-89bc-3c9347238a44] Running
	I1212 20:11:22.349615  306436 system_pods.go:61] "kube-apiserver-newest-cni-832562" [4bafc9d8-689e-4b1d-aa30-d6a7ca78b990] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:11:22.349621  306436 system_pods.go:61] "kube-controller-manager-newest-cni-832562" [39096cb8-3644-4518-9f94-ee0bafe5f02a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:11:22.349625  306436 system_pods.go:61] "kube-proxy-x67v5" [62e57f5e-f9e9-4a12-8e87-0f95e2e0879d] Running
	I1212 20:11:22.349637  306436 system_pods.go:61] "kube-scheduler-newest-cni-832562" [86b42489-2f0a-46e5-9ebc-e551a2a0aa33] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:11:22.349648  306436 system_pods.go:61] "storage-provisioner" [d57bccb6-b89e-405d-ae22-62d444454f02] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1212 20:11:22.349654  306436 system_pods.go:74] duration metric: took 3.784457ms to wait for pod list to return data ...
	I1212 20:11:22.349664  306436 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:22.352097  306436 default_sa.go:45] found service account: "default"
	I1212 20:11:22.352118  306436 default_sa.go:55] duration metric: took 2.44826ms for default service account to be created ...
	I1212 20:11:22.352131  306436 kubeadm.go:587] duration metric: took 2.68256s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 20:11:22.352152  306436 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:11:22.354718  306436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:11:22.354740  306436 node_conditions.go:123] node cpu capacity is 8
	I1212 20:11:22.354752  306436 node_conditions.go:105] duration metric: took 2.594521ms to run NodePressure ...
	I1212 20:11:22.354766  306436 start.go:242] waiting for startup goroutines ...
	I1212 20:11:22.354775  306436 start.go:247] waiting for cluster config update ...
	I1212 20:11:22.354791  306436 start.go:256] writing updated cluster config ...
	I1212 20:11:22.355051  306436 ssh_runner.go:195] Run: rm -f paused
	I1212 20:11:22.418500  306436 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1212 20:11:22.420410  306436 out.go:179] * Done! kubectl is now configured to use "newest-cni-832562" cluster and "default" namespace by default
	I1212 20:11:20.342571  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:20.842481  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:21.342086  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:21.842876  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:22.342351  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:22.842084  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:23.343094  301411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:11:23.438658  301411 kubeadm.go:1114] duration metric: took 4.200329546s to wait for elevateKubeSystemPrivileges
	I1212 20:11:23.438700  301411 kubeadm.go:403] duration metric: took 15.380616174s to StartCluster
	I1212 20:11:23.438722  301411 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:23.438814  301411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:23.440749  301411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:23.441006  301411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:11:23.441006  301411 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:23.441095  301411 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:11:23.441174  301411 config.go:182] Loaded profile config "auto-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:23.441213  301411 addons.go:70] Setting storage-provisioner=true in profile "auto-789448"
	I1212 20:11:23.441234  301411 addons.go:239] Setting addon storage-provisioner=true in "auto-789448"
	I1212 20:11:23.441249  301411 addons.go:70] Setting default-storageclass=true in profile "auto-789448"
	I1212 20:11:23.441266  301411 host.go:66] Checking if "auto-789448" exists ...
	I1212 20:11:23.441327  301411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-789448"
	I1212 20:11:23.441697  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:23.441865  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:23.442425  301411 out.go:179] * Verifying Kubernetes components...
	I1212 20:11:23.443597  301411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:11:23.468338  301411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:11:23.470077  301411 addons.go:239] Setting addon default-storageclass=true in "auto-789448"
	I1212 20:11:23.470124  301411 host.go:66] Checking if "auto-789448" exists ...
	I1212 20:11:23.470456  301411 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:23.470470  301411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:11:23.470519  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:23.470618  301411 cli_runner.go:164] Run: docker container inspect auto-789448 --format={{.State.Status}}
	I1212 20:11:23.502649  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:23.503421  301411 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:23.503517  301411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:11:23.503598  301411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-789448
	I1212 20:11:23.527733  301411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/auto-789448/id_rsa Username:docker}
	I1212 20:11:23.572919  301411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:11:23.634066  301411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:11:23.664484  301411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:11:23.670838  301411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:11:23.829995  301411 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1212 20:11:23.831866  301411 node_ready.go:35] waiting up to 15m0s for node "auto-789448" to be "Ready" ...
	I1212 20:11:24.010802  301411 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:11:24.011762  301411 addons.go:530] duration metric: took 570.665845ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:11:24.335022  301411 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-789448" context rescaled to 1 replicas
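The sed pipeline a few lines earlier injects a `hosts` block (mapping host.minikube.internal to the gateway IP 192.168.85.1) into the CoreDNS Corefile just before its `forward . /etc/resolv.conf` line, which is what the "host record injected into CoreDNS's ConfigMap" message reports. A small Go sketch of that string surgery, assuming a toy Corefile and omitting the `log` directive the pipeline also adds:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the
// "forward . /etc/resolv.conf" line of a CoreDNS Corefile.
func injectHostRecord(corefile, gatewayIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
}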
	W1212 20:11:22.037530  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	W1212 20:11:24.037904  295304 node_ready.go:57] node "embed-certs-399565" has "Ready":"False" status (will retry)
	I1212 20:11:24.538643  295304 node_ready.go:49] node "embed-certs-399565" is "Ready"
	I1212 20:11:24.538679  295304 node_ready.go:38] duration metric: took 11.004361204s for node "embed-certs-399565" to be "Ready" ...
	I1212 20:11:24.538695  295304 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:24.538750  295304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:24.555662  295304 api_server.go:72] duration metric: took 11.305580102s to wait for apiserver process to appear ...
	I1212 20:11:24.555693  295304 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:24.555729  295304 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:11:24.568618  295304 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 20:11:24.571976  295304 api_server.go:141] control plane version: v1.34.2
	I1212 20:11:24.572051  295304 api_server.go:131] duration metric: took 16.349531ms to wait for apiserver health ...
	I1212 20:11:24.572081  295304 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:24.581704  295304 system_pods.go:59] 8 kube-system pods found
	I1212 20:11:24.581801  295304 system_pods.go:61] "coredns-66bc5c9577-zg2v9" [8b0daa17-68a0-4f3f-b50c-114a8218c542] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:24.581843  295304 system_pods.go:61] "etcd-embed-certs-399565" [ba75b498-a50f-48ae-9e09-c928ba04794f] Running
	I1212 20:11:24.581874  295304 system_pods.go:61] "kindnet-5fbmr" [6c2a5685-5864-4af2-a1ef-5f355fd2a95b] Running
	I1212 20:11:24.581883  295304 system_pods.go:61] "kube-apiserver-embed-certs-399565" [8850ea17-2667-403a-af36-83cdefa2548a] Running
	I1212 20:11:24.581889  295304 system_pods.go:61] "kube-controller-manager-embed-certs-399565" [5e04b62d-f4fd-4664-aee8-e9b0a4b015f0] Running
	I1212 20:11:24.581894  295304 system_pods.go:61] "kube-proxy-xgs9b" [82692b91-abfa-4ef0-915d-af7f57048d82] Running
	I1212 20:11:24.581899  295304 system_pods.go:61] "kube-scheduler-embed-certs-399565" [3f9b76ad-c6b0-4de4-86ad-2ca8b4fee658] Running
	I1212 20:11:24.581907  295304 system_pods.go:61] "storage-provisioner" [970ffc0a-f3a7-4981-a59e-f47762e9d53e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:24.581915  295304 system_pods.go:74] duration metric: took 9.817831ms to wait for pod list to return data ...
	I1212 20:11:24.581925  295304 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:24.586018  295304 default_sa.go:45] found service account: "default"
	I1212 20:11:24.586045  295304 default_sa.go:55] duration metric: took 4.111138ms for default service account to be created ...
	I1212 20:11:24.586072  295304 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:11:24.647399  295304 system_pods.go:86] 8 kube-system pods found
	I1212 20:11:24.647523  295304 system_pods.go:89] "coredns-66bc5c9577-zg2v9" [8b0daa17-68a0-4f3f-b50c-114a8218c542] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:24.647542  295304 system_pods.go:89] "etcd-embed-certs-399565" [ba75b498-a50f-48ae-9e09-c928ba04794f] Running
	I1212 20:11:24.647554  295304 system_pods.go:89] "kindnet-5fbmr" [6c2a5685-5864-4af2-a1ef-5f355fd2a95b] Running
	I1212 20:11:24.647561  295304 system_pods.go:89] "kube-apiserver-embed-certs-399565" [8850ea17-2667-403a-af36-83cdefa2548a] Running
	I1212 20:11:24.647571  295304 system_pods.go:89] "kube-controller-manager-embed-certs-399565" [5e04b62d-f4fd-4664-aee8-e9b0a4b015f0] Running
	I1212 20:11:24.647577  295304 system_pods.go:89] "kube-proxy-xgs9b" [82692b91-abfa-4ef0-915d-af7f57048d82] Running
	I1212 20:11:24.647585  295304 system_pods.go:89] "kube-scheduler-embed-certs-399565" [3f9b76ad-c6b0-4de4-86ad-2ca8b4fee658] Running
	I1212 20:11:24.647593  295304 system_pods.go:89] "storage-provisioner" [970ffc0a-f3a7-4981-a59e-f47762e9d53e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:24.647605  295304 system_pods.go:126] duration metric: took 61.52191ms to wait for k8s-apps to be running ...
	I1212 20:11:24.647616  295304 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:11:24.647668  295304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:24.664804  295304 system_svc.go:56] duration metric: took 17.177491ms WaitForService to wait for kubelet
	I1212 20:11:24.664843  295304 kubeadm.go:587] duration metric: took 11.414764652s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:11:24.664898  295304 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:11:24.668138  295304 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:11:24.668165  295304 node_conditions.go:123] node cpu capacity is 8
	I1212 20:11:24.668200  295304 node_conditions.go:105] duration metric: took 3.291438ms to run NodePressure ...
	I1212 20:11:24.668220  295304 start.go:242] waiting for startup goroutines ...
	I1212 20:11:24.668231  295304 start.go:247] waiting for cluster config update ...
	I1212 20:11:24.668239  295304 start.go:256] writing updated cluster config ...
	I1212 20:11:24.668592  295304 ssh_runner.go:195] Run: rm -f paused
	I1212 20:11:24.673111  295304 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:11:24.676968  295304 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zg2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:25.682125  295304 pod_ready.go:94] pod "coredns-66bc5c9577-zg2v9" is "Ready"
	I1212 20:11:25.682149  295304 pod_ready.go:86] duration metric: took 1.005157522s for pod "coredns-66bc5c9577-zg2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:25.684136  295304 pod_ready.go:83] waiting for pod "etcd-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:25.687858  295304 pod_ready.go:94] pod "etcd-embed-certs-399565" is "Ready"
	I1212 20:11:25.687889  295304 pod_ready.go:86] duration metric: took 3.72592ms for pod "etcd-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:25.689959  295304 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:25.693623  295304 pod_ready.go:94] pod "kube-apiserver-embed-certs-399565" is "Ready"
	I1212 20:11:25.693651  295304 pod_ready.go:86] duration metric: took 3.658766ms for pod "kube-apiserver-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:25.695401  295304 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:25.881535  295304 pod_ready.go:94] pod "kube-controller-manager-embed-certs-399565" is "Ready"
	I1212 20:11:25.881558  295304 pod_ready.go:86] duration metric: took 186.140227ms for pod "kube-controller-manager-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:26.081807  295304 pod_ready.go:83] waiting for pod "kube-proxy-xgs9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:26.480878  295304 pod_ready.go:94] pod "kube-proxy-xgs9b" is "Ready"
	I1212 20:11:26.480917  295304 pod_ready.go:86] duration metric: took 399.079893ms for pod "kube-proxy-xgs9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:26.681726  295304 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:27.081455  295304 pod_ready.go:94] pod "kube-scheduler-embed-certs-399565" is "Ready"
	I1212 20:11:27.081485  295304 pod_ready.go:86] duration metric: took 399.730675ms for pod "kube-scheduler-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:11:27.081501  295304 pod_ready.go:40] duration metric: took 2.408360015s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:11:27.127492  295304 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:11:27.130389  295304 out.go:179] * Done! kubectl is now configured to use "embed-certs-399565" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.196901435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.200213204Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0acce53a-005d-458b-9c5c-c502ac9e1da0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.200780154Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=df9196bd-29af-4756-bcdb-d9626dff5b95 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.201718035Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.202320606Z" level=info msg="Ran pod sandbox 2ded7466bf5acbdc0fa7469415d3650534bad29720a4944a6eed25dd523606ce with infra container: kube-system/kindnet-zpw2b/POD" id=0acce53a-005d-458b-9c5c-c502ac9e1da0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.202413069Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.203251842Z" level=info msg="Ran pod sandbox 45950668b15aff558dee8dbe3e4a3010379b07f69973797c94f81e655436f6c1 with infra container: kube-system/kube-proxy-x67v5/POD" id=df9196bd-29af-4756-bcdb-d9626dff5b95 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.203392999Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c0bd8e4-56b4-439c-93e9-9c116862ac74 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.204359514Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a603d21d-f1e6-4cce-ad6c-e7e29b8df7ab name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.204366958Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b957bf60-a217-4adc-9e4d-b29255c9f33a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.205581153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a94c4394-b208-49db-9a4f-3e6aa18b606b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.205805833Z" level=info msg="Creating container: kube-system/kindnet-zpw2b/kindnet-cni" id=5c9599bf-a698-47d4-a720-0f312dfa5712 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.205899607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.206701859Z" level=info msg="Creating container: kube-system/kube-proxy-x67v5/kube-proxy" id=0a295639-0b0d-4377-a5c7-a7418a683e20 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.206819586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.210993058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.211578552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.213347209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.213726562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.240940787Z" level=info msg="Created container 1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697: kube-system/kindnet-zpw2b/kindnet-cni" id=5c9599bf-a698-47d4-a720-0f312dfa5712 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.24173612Z" level=info msg="Starting container: 1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697" id=cda72eba-57b1-42e0-bcbe-c1a571d1dd00 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.244173727Z" level=info msg="Started container" PID=1060 containerID=1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697 description=kube-system/kindnet-zpw2b/kindnet-cni id=cda72eba-57b1-42e0-bcbe-c1a571d1dd00 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ded7466bf5acbdc0fa7469415d3650534bad29720a4944a6eed25dd523606ce
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.249301986Z" level=info msg="Created container 20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1: kube-system/kube-proxy-x67v5/kube-proxy" id=0a295639-0b0d-4377-a5c7-a7418a683e20 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.249808487Z" level=info msg="Starting container: 20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1" id=aaa488bb-ebba-4814-ab96-bef64d9b4834 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:21 newest-cni-832562 crio[522]: time="2025-12-12T20:11:21.252385017Z" level=info msg="Started container" PID=1061 containerID=20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1 description=kube-system/kube-proxy-x67v5/kube-proxy id=aaa488bb-ebba-4814-ab96-bef64d9b4834 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45950668b15aff558dee8dbe3e4a3010379b07f69973797c94f81e655436f6c1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20ef95e4dd71f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   45950668b15af       kube-proxy-x67v5                            kube-system
	1a65b654bdace       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   2ded7466bf5ac       kindnet-zpw2b                               kube-system
	302da1b0b4b49       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   626c6c89a894d       kube-apiserver-newest-cni-832562            kube-system
	f0a7c03f08d77       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   d03f6412e288a       kube-scheduler-newest-cni-832562            kube-system
	41418d6b64580       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   f115bdf351268       kube-controller-manager-newest-cni-832562   kube-system
	cf33221a5bf25       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   f56b859dbae4c       etcd-newest-cni-832562                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-832562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-832562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=newest-cni-832562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_11_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-832562
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 12 Dec 2025 20:11:20 +0000   Fri, 12 Dec 2025 20:10:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-832562
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                02e0f34f-a5d1-439b-8544-2451e32971bb
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-832562                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-zpw2b                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-832562             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-832562    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-x67v5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-832562             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  23s   node-controller  Node newest-cni-832562 event: Registered Node newest-cni-832562 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-832562 event: Registered Node newest-cni-832562 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [cf33221a5bf2511a5c4dcc0fef48a4b8caf2e2b4b846415a5686cd3646cae564] <==
	{"level":"warn","ts":"2025-12-12T20:11:20.198678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.205810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.212580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.219571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.228797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.235971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.241993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.248434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.254838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.262343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.270982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.277701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.287434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.294209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.301038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.307467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.314550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.328600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.335334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.341530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.347926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.355359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.378499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.392464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:20.445843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36534","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:11:28 up 53 min,  0 user,  load average: 4.54, 2.50, 1.68
	Linux newest-cni-832562 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a65b654bdace8f48adbaa2ad10141fbcf9cb7ef682a91ba514aeb6d10554697] <==
	I1212 20:11:21.427879       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:11:21.519225       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 20:11:21.519378       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:11:21.519401       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:11:21.519432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:11:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:11:21.631161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:11:21.631217       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:11:21.631231       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:11:21.631473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:11:22.131735       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:11:22.131756       1 metrics.go:72] Registering metrics
	I1212 20:11:22.131820       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [302da1b0b4b49d8184afdd2afaccda38c21edb87a4612a1dc37701a62340f511] <==
	I1212 20:11:20.913755       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 20:11:20.913860       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 20:11:20.914812       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:20.914352       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 20:11:20.914863       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:20.914971       1 aggregator.go:187] initial CRD sync complete...
	I1212 20:11:20.915019       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:11:20.915098       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:11:20.915131       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:11:20.915801       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:20.925261       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 20:11:20.926309       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1212 20:11:20.952521       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:11:20.973389       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:11:21.046599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:11:21.256199       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:11:21.288216       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:11:21.308886       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:11:21.316650       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:11:21.362185       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.207.127"}
	I1212 20:11:21.378675       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.164.212"}
	I1212 20:11:21.816570       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 20:11:24.474028       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:11:24.527181       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:11:24.676674       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [41418d6b64580bd178a2682078ca82622588d0949f2b8a780d7e198c24ad245f] <==
	I1212 20:11:24.078123       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079451       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079533       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079521       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078135       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079581       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079625       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078131       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079726       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078148       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078137       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.079599       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.078141       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080260       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080340       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080388       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080429       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080411       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.080327       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.085321       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.087221       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:24.179602       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:24.179628       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 20:11:24.179634       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 20:11:24.187464       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [20ef95e4dd71f468167540741bdcb99d654b925b1576e557dbd0efeb504685b1] <==
	I1212 20:11:21.294489       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:11:21.374500       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:21.475444       1 shared_informer.go:377] "Caches are synced"
	I1212 20:11:21.475477       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 20:11:21.475614       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:11:21.499622       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:11:21.499683       1 server_linux.go:136] "Using iptables Proxier"
	I1212 20:11:21.505402       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:11:21.505795       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 20:11:21.505816       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:21.507328       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:11:21.507357       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:11:21.507388       1 config.go:200] "Starting service config controller"
	I1212 20:11:21.507394       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:11:21.507432       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:11:21.507444       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:11:21.507606       1 config.go:309] "Starting node config controller"
	I1212 20:11:21.507638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:11:21.507652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:11:21.607527       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:11:21.607547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:11:21.607556       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f0a7c03f08d77407822e1d8f041f02ceb34d3703a2fae8bc8ce0492d7f51f8d1] <==
	I1212 20:11:19.779837       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:11:20.867961       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:11:20.868105       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:11:20.868121       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:11:20.868161       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:11:20.902575       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1212 20:11:20.902611       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:20.905736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:11:20.905772       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 20:11:20.905864       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:11:20.907025       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:11:21.006892       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.930796     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.930944     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.955089     677 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.955383     677 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.955432     677 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.958141     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-832562\" already exists" pod="kube-system/kube-scheduler-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.959017     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.961405     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-832562\" already exists" pod="kube-system/kube-apiserver-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.962172     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.961426     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-832562\" already exists" pod="kube-system/etcd-newest-cni-832562"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.962771     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: E1212 20:11:20.963072     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-832562" containerName="etcd"
	Dec 12 20:11:20 newest-cni-832562 kubelet[677]: I1212 20:11:20.991623     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044141     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-lib-modules\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044193     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-lib-modules\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044232     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62e57f5e-f9e9-4a12-8e87-0f95e2e0879d-xtables-lock\") pod \"kube-proxy-x67v5\" (UID: \"62e57f5e-f9e9-4a12-8e87-0f95e2e0879d\") " pod="kube-system/kube-proxy-x67v5"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044256     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-cni-cfg\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: I1212 20:11:21.044324     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2340f364-5a1b-4ed7-89bc-3c9347238a44-xtables-lock\") pod \"kindnet-zpw2b\" (UID: \"2340f364-5a1b-4ed7-89bc-3c9347238a44\") " pod="kube-system/kindnet-zpw2b"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: E1212 20:11:21.940508     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-832562" containerName="etcd"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: E1212 20:11:21.940620     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:21 newest-cni-832562 kubelet[677]: E1212 20:11:21.940979     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-832562" containerName="kube-apiserver"
	Dec 12 20:11:22 newest-cni-832562 kubelet[677]: E1212 20:11:22.942550     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-832562" containerName="kube-scheduler"
	Dec 12 20:11:23 newest-cni-832562 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:11:23 newest-cni-832562 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:11:23 newest-cni-832562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832562 -n newest-cni-832562
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832562 -n newest-cni-832562: exit status 2 (318.787502ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-832562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2: exit status 1 (59.860297ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4762p" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-tp6gm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-l9nc2" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-832562 describe pod coredns-7d764666f9-4762p storage-provisioner dashboard-metrics-scraper-867fb5f87b-tp6gm kubernetes-dashboard-b84665fb8-l9nc2: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.74s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-399565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-399565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (487.727443ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:11:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-399565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-399565 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-399565 describe deploy/metrics-server -n kube-system: exit status 1 (71.437348ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-399565 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-399565
helpers_test.go:244: (dbg) docker inspect embed-certs-399565:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1",
	        "Created": "2025-12-12T20:10:48.358308511Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297717,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:10:48.409836774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/hosts",
	        "LogPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1-json.log",
	        "Name": "/embed-certs-399565",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-399565:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-399565",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1",
	                "LowerDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-399565",
	                "Source": "/var/lib/docker/volumes/embed-certs-399565/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-399565",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-399565",
	                "name.minikube.sigs.k8s.io": "embed-certs-399565",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "910d612c35da3c38566c244016fd380c164ffd16c97e60f59e140330bdd00fc7",
	            "SandboxKey": "/var/run/docker/netns/910d612c35da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-399565": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6c29c7e79781ac9639d4796d21d5075ddac5af9af8ecc99427d5e7f6d18273d7",
	                    "EndpointID": "14137448277f36b4a8107030f4bdf776e86a6e0112898e0feffee6ad88c9fd68",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "62:c3:bb:db:5d:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-399565",
	                        "71e8830a236d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
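
The inspect output above shows how the guest ports of the embed-certs-399565 container are published to loopback (22/tcp → 127.0.0.1:33089, 8443/tcp → 127.0.0.1:33092, and so on). A minimal sketch of querying those same mappings directly with the Docker CLI, assuming the embed-certs-399565 container still exists on the host:

	docker container inspect embed-certs-399565 --format '{{json .NetworkSettings.Ports}}'
	docker port embed-certs-399565 8443/tcp   # per the output above, this should print 127.0.0.1:33092
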
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-399565 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ image   │ old-k8s-version-824670 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p old-k8s-version-824670 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-991615                                                                                                                                                                                                                         │ kubernetes-upgrade-991615    │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-824670                                                                                                                                                                                                                            │ old-k8s-version-824670       │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p disable-driver-mounts-044739                                                                                                                                                                                                                      │ disable-driver-mounts-044739 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:11 UTC │
	│ image   │ no-preload-753103 image list --format=json                                                                                                                                                                                                           │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ pause   │ -p no-preload-753103 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ delete  │ -p no-preload-753103                                                                                                                                                                                                                                 │ no-preload-753103            │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ start   │ -p auto-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-832562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ stop    │ -p newest-cni-832562 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-832562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ start   │ -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ image   │ newest-cni-832562 image list --format=json                                                                                                                                                                                                           │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ pause   │ -p newest-cni-832562 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433034 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ delete  │ -p newest-cni-832562                                                                                                                                                                                                                                 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ delete  │ -p newest-cni-832562                                                                                                                                                                                                                                 │ newest-cni-832562            │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │ 12 Dec 25 20:11 UTC │
	│ start   │ -p kindnet-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-399565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:11:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:11:31.483177  312743 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:11:31.483452  312743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:31.483462  312743 out.go:374] Setting ErrFile to fd 2...
	I1212 20:11:31.483469  312743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:11:31.483698  312743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:11:31.484356  312743 out.go:368] Setting JSON to false
	I1212 20:11:31.485542  312743 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3238,"bootTime":1765567053,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:11:31.485593  312743 start.go:143] virtualization: kvm guest
	I1212 20:11:31.487503  312743 out.go:179] * [kindnet-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:11:31.488636  312743 notify.go:221] Checking for updates...
	I1212 20:11:31.488666  312743 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:11:31.489712  312743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:11:31.490893  312743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:11:31.492242  312743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:11:31.493236  312743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:11:31.494205  312743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:11:31.495783  312743 config.go:182] Loaded profile config "auto-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:31.495876  312743 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:31.495963  312743 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:11:31.496058  312743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:11:31.522841  312743 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:11:31.522946  312743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:31.578811  312743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-12 20:11:31.568952216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:31.578915  312743 docker.go:319] overlay module found
	I1212 20:11:31.580369  312743 out.go:179] * Using the docker driver based on user configuration
	I1212 20:11:31.581326  312743 start.go:309] selected driver: docker
	I1212 20:11:31.581337  312743 start.go:927] validating driver "docker" against <nil>
	I1212 20:11:31.581348  312743 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:11:31.581931  312743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:11:31.635059  312743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-12 20:11:31.626119905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:11:31.635205  312743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:11:31.635441  312743 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:11:31.636881  312743 out.go:179] * Using Docker driver with root privileges
	I1212 20:11:31.637794  312743 cni.go:84] Creating CNI manager for "kindnet"
	I1212 20:11:31.637813  312743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:11:31.637901  312743 start.go:353] cluster config:
	{Name:kindnet-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:11:31.639148  312743 out.go:179] * Starting "kindnet-789448" primary control-plane node in "kindnet-789448" cluster
	I1212 20:11:31.640082  312743 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:11:31.641136  312743 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:11:31.642150  312743 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:11:31.642179  312743 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:11:31.642192  312743 cache.go:65] Caching tarball of preloaded images
	I1212 20:11:31.642248  312743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:11:31.642305  312743 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:11:31.642322  312743 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:11:31.642460  312743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kindnet-789448/config.json ...
	I1212 20:11:31.642489  312743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/kindnet-789448/config.json: {Name:mk33a9e55bf56c0b95e103b889717ca23016d4b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:11:31.661654  312743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:11:31.661670  312743 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:11:31.661685  312743 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:11:31.661716  312743 start.go:360] acquireMachinesLock for kindnet-789448: {Name:mkdcb8557518ff758c57215710bc2b42b6475967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:11:31.661809  312743 start.go:364] duration metric: took 76.81µs to acquireMachinesLock for "kindnet-789448"
	I1212 20:11:31.661837  312743 start.go:93] Provisioning new machine with config: &{Name:kindnet-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:11:31.661906  312743 start.go:125] createHost starting for "" (driver="docker")
	W1212 20:11:30.834780  301411 node_ready.go:57] node "auto-789448" has "Ready":"False" status (will retry)
	W1212 20:11:32.835419  301411 node_ready.go:57] node "auto-789448" has "Ready":"False" status (will retry)
	I1212 20:11:34.937264  301411 node_ready.go:49] node "auto-789448" is "Ready"
	I1212 20:11:34.937325  301411 node_ready.go:38] duration metric: took 11.10541811s for node "auto-789448" to be "Ready" ...
	I1212 20:11:34.937342  301411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:34.937934  301411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:34.951824  301411 api_server.go:72] duration metric: took 11.510784794s to wait for apiserver process to appear ...
	I1212 20:11:34.951847  301411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:34.951872  301411 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 20:11:34.995590  301411 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1212 20:11:34.996592  301411 api_server.go:141] control plane version: v1.34.2
	I1212 20:11:34.996617  301411 api_server.go:131] duration metric: took 44.762458ms to wait for apiserver health ...
	I1212 20:11:34.996627  301411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:35.150122  301411 system_pods.go:59] 8 kube-system pods found
	I1212 20:11:35.150170  301411 system_pods.go:61] "coredns-66bc5c9577-9zccb" [c27c2c44-0a3e-49f8-bf96-72d402ac08ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:35.150222  301411 system_pods.go:61] "etcd-auto-789448" [e7f5bfa3-6eb6-4c83-aa77-ad8eda150226] Running
	I1212 20:11:35.150229  301411 system_pods.go:61] "kindnet-5cqfs" [6f551195-0e2b-4b4c-a7ba-6efe652077ff] Running
	I1212 20:11:35.150239  301411 system_pods.go:61] "kube-apiserver-auto-789448" [341e258c-1f89-47e7-ae97-f6fa7fb7dc33] Running
	I1212 20:11:35.150249  301411 system_pods.go:61] "kube-controller-manager-auto-789448" [024d50d7-716d-4739-b159-f50fa260849c] Running
	I1212 20:11:35.150258  301411 system_pods.go:61] "kube-proxy-zf8hx" [a8f2c24b-0b1d-450c-900f-06349350d2cb] Running
	I1212 20:11:35.150264  301411 system_pods.go:61] "kube-scheduler-auto-789448" [075a4afc-5e53-47a0-98ed-0f8f86bf2e68] Running
	I1212 20:11:35.150296  301411 system_pods.go:61] "storage-provisioner" [dc76909e-d675-4b5c-95bf-b3d0b6f336ed] Pending
	I1212 20:11:35.150305  301411 system_pods.go:74] duration metric: took 153.67084ms to wait for pod list to return data ...
	I1212 20:11:35.150317  301411 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:35.152660  301411 default_sa.go:45] found service account: "default"
	I1212 20:11:35.152682  301411 default_sa.go:55] duration metric: took 2.357468ms for default service account to be created ...
	I1212 20:11:35.152691  301411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:11:35.155381  301411 system_pods.go:86] 8 kube-system pods found
	I1212 20:11:35.155412  301411 system_pods.go:89] "coredns-66bc5c9577-9zccb" [c27c2c44-0a3e-49f8-bf96-72d402ac08ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:35.155420  301411 system_pods.go:89] "etcd-auto-789448" [e7f5bfa3-6eb6-4c83-aa77-ad8eda150226] Running
	I1212 20:11:35.155428  301411 system_pods.go:89] "kindnet-5cqfs" [6f551195-0e2b-4b4c-a7ba-6efe652077ff] Running
	I1212 20:11:35.155438  301411 system_pods.go:89] "kube-apiserver-auto-789448" [341e258c-1f89-47e7-ae97-f6fa7fb7dc33] Running
	I1212 20:11:35.155443  301411 system_pods.go:89] "kube-controller-manager-auto-789448" [024d50d7-716d-4739-b159-f50fa260849c] Running
	I1212 20:11:35.155449  301411 system_pods.go:89] "kube-proxy-zf8hx" [a8f2c24b-0b1d-450c-900f-06349350d2cb] Running
	I1212 20:11:35.155454  301411 system_pods.go:89] "kube-scheduler-auto-789448" [075a4afc-5e53-47a0-98ed-0f8f86bf2e68] Running
	I1212 20:11:35.155464  301411 system_pods.go:89] "storage-provisioner" [dc76909e-d675-4b5c-95bf-b3d0b6f336ed] Pending
	I1212 20:11:35.155488  301411 retry.go:31] will retry after 195.720244ms: missing components: kube-dns
	
	
	==> CRI-O <==
	Dec 12 20:11:24 embed-certs-399565 crio[767]: time="2025-12-12T20:11:24.584644723Z" level=info msg="Starting container: 077b2cf1a371bf65974bb70983e5b4efd326f48926ff89a2c53d0fbf0c0c8242" id=12feb2bb-0f0e-44ce-98a3-0612d4ffa296 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:24 embed-certs-399565 crio[767]: time="2025-12-12T20:11:24.587190047Z" level=info msg="Started container" PID=1858 containerID=077b2cf1a371bf65974bb70983e5b4efd326f48926ff89a2c53d0fbf0c0c8242 description=kube-system/coredns-66bc5c9577-zg2v9/coredns id=12feb2bb-0f0e-44ce-98a3-0612d4ffa296 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba6bec17556d4611eb836a318796e2f42fc22a1a794768acd57e25d988a99a9d
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.585709124Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c909e062-698a-496c-a4e0-f26f061be914 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.585787619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.590824347Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:67fbf0fc4f3443519f2efb6d44ce995cf0657652ee9bd0dd7e277f1cf0c36ed8 UID:2b73ee4b-c108-4ada-b144-9eb629cde278 NetNS:/var/run/netns/db210595-c4e0-4538-92ab-64f213af8a1b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000352878}] Aliases:map[]}"
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.59085949Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.601270124Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:67fbf0fc4f3443519f2efb6d44ce995cf0657652ee9bd0dd7e277f1cf0c36ed8 UID:2b73ee4b-c108-4ada-b144-9eb629cde278 NetNS:/var/run/netns/db210595-c4e0-4538-92ab-64f213af8a1b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000352878}] Aliases:map[]}"
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.601437956Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.602319946Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.603374875Z" level=info msg="Ran pod sandbox 67fbf0fc4f3443519f2efb6d44ce995cf0657652ee9bd0dd7e277f1cf0c36ed8 with infra container: default/busybox/POD" id=c909e062-698a-496c-a4e0-f26f061be914 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.605779973Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5cbba5e5-e04b-488c-8750-79bd6a3215d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.605913112Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5cbba5e5-e04b-488c-8750-79bd6a3215d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.60594754Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5cbba5e5-e04b-488c-8750-79bd6a3215d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.606622525Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b70f2a5-3974-4f0e-a1ec-fab785d5c4e6 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:11:27 embed-certs-399565 crio[767]: time="2025-12-12T20:11:27.60809341Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.227444289Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9b70f2a5-3974-4f0e-a1ec-fab785d5c4e6 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.22814418Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=48827c4f-498a-4005-b6e6-b06d2cc79d94 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.229595668Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=960916c1-99e1-49da-aaa1-2741dd82fc7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.235438392Z" level=info msg="Creating container: default/busybox/busybox" id=8a1d56e7-0aa4-4d3e-87e5-b02ba31e5422 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.235552892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.23975871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.240319654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.263691509Z" level=info msg="Created container e3e4ba2e76cda68969073a4e49fa96f2bd9330485f130855a678382ba47d5fc2: default/busybox/busybox" id=8a1d56e7-0aa4-4d3e-87e5-b02ba31e5422 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.264243002Z" level=info msg="Starting container: e3e4ba2e76cda68969073a4e49fa96f2bd9330485f130855a678382ba47d5fc2" id=d1c118b6-75af-441b-b452-64188227de4e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:11:28 embed-certs-399565 crio[767]: time="2025-12-12T20:11:28.266135509Z" level=info msg="Started container" PID=1933 containerID=e3e4ba2e76cda68969073a4e49fa96f2bd9330485f130855a678382ba47d5fc2 description=default/busybox/busybox id=d1c118b6-75af-441b-b452-64188227de4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fbf0fc4f3443519f2efb6d44ce995cf0657652ee9bd0dd7e277f1cf0c36ed8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	e3e4ba2e76cda       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   67fbf0fc4f344       busybox                                      default
	077b2cf1a371b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   ba6bec17556d4       coredns-66bc5c9577-zg2v9                     kube-system
	af388bac85bd7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   95f60e4322bff       storage-provisioner                          kube-system
	8e4fbb90b0e1f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   02112e8fe1552       kube-proxy-xgs9b                             kube-system
	553a09330bcd4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   903bfe63d3486       kindnet-5fbmr                                kube-system
	9951e242ebeee       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   c37677c17c51c       kube-scheduler-embed-certs-399565            kube-system
	4fdbd5e59c4ca       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   ad2a7d9b3d2ef       kube-controller-manager-embed-certs-399565   kube-system
	55ee531c4d3c8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   f6d3b7b8045b2       kube-apiserver-embed-certs-399565            kube-system
	34963ce468032       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   f6f74d967ab6e       etcd-embed-certs-399565                      kube-system
	
	
	==> coredns [077b2cf1a371bf65974bb70983e5b4efd326f48926ff89a2c53d0fbf0c0c8242] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34565 - 24569 "HINFO IN 1333377455323674641.6867049087014212713. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.095136587s
	
	
	==> describe nodes <==
	Name:               embed-certs-399565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-399565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=embed-certs-399565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_11_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:11:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-399565
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:11:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:11:28 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:11:28 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:11:28 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:11:28 +0000   Fri, 12 Dec 2025 20:11:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-399565
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                d4ee55d6-eeec-48fd-851e-1386ebc672fc
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-zg2v9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-399565                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-5fbmr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-399565             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-399565    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-xgs9b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-399565             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-399565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-399565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-399565 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-399565 event: Registered Node embed-certs-399565 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-399565 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [34963ce4680320346a94f17930c58a2748705da76555ee427e4aa4c7e26d6448] <==
	{"level":"warn","ts":"2025-12-12T20:11:04.242643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.250951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.261560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.269516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.276663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.283794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.291732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.299428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.307466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.319964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.331422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.337905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.343996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.350765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.357445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.363953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.370716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.377173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.391193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.398151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.404687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.420134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.426236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:04.432845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34414","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T20:11:34.798740Z","caller":"traceutil/trace.go:172","msg":"trace[23664701] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"128.790179ms","start":"2025-12-12T20:11:34.669931Z","end":"2025-12-12T20:11:34.798721Z","steps":["trace[23664701] 'process raft request'  (duration: 128.645055ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:11:37 up 54 min,  0 user,  load average: 4.09, 2.47, 1.68
	Linux embed-certs-399565 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [553a09330bcd4b91f69a9fddab1a69a40983ea83a705a7071c354daed30f832d] <==
	I1212 20:11:13.692404       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:11:13.692693       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 20:11:13.692856       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:11:13.692877       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:11:13.692904       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:11:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:11:13.892937       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:11:13.892976       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:11:13.892996       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:11:13.893594       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:11:14.193958       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:11:14.193993       1 metrics.go:72] Registering metrics
	I1212 20:11:14.194070       1 controller.go:711] "Syncing nftables rules"
	I1212 20:11:23.893440       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:11:23.893500       1 main.go:301] handling current node
	I1212 20:11:33.893970       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:11:33.894035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [55ee531c4d3c8a9460eea370d74635873f6dc1a2c0524fc9989421ddd736e6a1] <==
	I1212 20:11:04.962987       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 20:11:04.964521       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:11:04.966008       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1212 20:11:04.968665       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:11:04.969910       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:11:04.970238       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:11:05.157560       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:11:05.867473       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 20:11:05.871451       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:11:05.871469       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:11:06.361178       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:11:06.395696       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:11:06.471850       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:11:06.478359       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1212 20:11:06.479418       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:11:06.483427       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:11:06.909019       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:11:07.699698       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:11:07.708724       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:11:07.721796       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 20:11:12.664259       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:11:12.676350       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:11:12.717173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:11:13.009464       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1212 20:11:35.442024       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:37334: use of closed network connection
	
	
	==> kube-controller-manager [4fdbd5e59c4caef8a59e162acf3c9b0158d7145fc053d4494d1dbcf50a0a20c5] <==
	I1212 20:11:11.934907       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 20:11:11.934999       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 20:11:11.935037       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 20:11:11.935043       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 20:11:11.935050       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 20:11:11.942734       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-399565" podCIDRs=["10.244.0.0/24"]
	I1212 20:11:11.943882       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 20:11:11.951232       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1212 20:11:11.955597       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 20:11:11.955774       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:11:11.956952       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 20:11:11.956977       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 20:11:11.957000       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:11:11.957027       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 20:11:11.957068       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 20:11:11.957177       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 20:11:11.958214       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 20:11:11.958248       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:11:11.958296       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 20:11:11.958563       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 20:11:11.958665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:11:11.960498       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 20:11:11.963139       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:11:11.986586       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:11:26.907987       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8e4fbb90b0e1febe09d0c9cfa69426b7f4ab0bc8ff5ac7780344b4f27acb489a] <==
	I1212 20:11:13.522389       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:11:13.593828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:11:13.694959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:11:13.695009       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1212 20:11:13.695114       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:11:13.715092       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:11:13.715173       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:11:13.720659       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:11:13.721148       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:11:13.721190       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:13.723005       1 config.go:200] "Starting service config controller"
	I1212 20:11:13.723042       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:11:13.723084       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:11:13.723100       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:11:13.723118       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:11:13.723131       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:11:13.723323       1 config.go:309] "Starting node config controller"
	I1212 20:11:13.723359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:11:13.823228       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:11:13.823259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:11:13.823304       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:11:13.823432       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [9951e242ebeeef618ccfcd395775aeec414b0ee9b16e2680546a03b1686d34b5] <==
	E1212 20:11:04.926861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:11:04.926900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:11:04.927058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:11:04.927049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 20:11:04.927137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 20:11:04.927148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 20:11:04.927222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 20:11:04.927269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 20:11:04.927262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 20:11:04.927295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 20:11:04.927430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:11:04.927471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 20:11:04.927524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 20:11:05.760606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 20:11:05.766925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:11:05.816401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 20:11:05.870864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 20:11:05.899096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 20:11:05.899216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:11:05.956358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 20:11:05.960406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 20:11:05.971463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 20:11:06.027691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:11:06.035786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1212 20:11:07.622626       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:11:08 embed-certs-399565 kubelet[1331]: I1212 20:11:08.612410    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-399565" podStartSLOduration=1.61238634 podStartE2EDuration="1.61238634s" podCreationTimestamp="2025-12-12 20:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:08.602369887 +0000 UTC m=+1.140987265" watchObservedRunningTime="2025-12-12 20:11:08.61238634 +0000 UTC m=+1.151003714"
	Dec 12 20:11:08 embed-certs-399565 kubelet[1331]: I1212 20:11:08.623267    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-399565" podStartSLOduration=1.6232441149999999 podStartE2EDuration="1.623244115s" podCreationTimestamp="2025-12-12 20:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:08.613888692 +0000 UTC m=+1.152506077" watchObservedRunningTime="2025-12-12 20:11:08.623244115 +0000 UTC m=+1.161861482"
	Dec 12 20:11:08 embed-certs-399565 kubelet[1331]: I1212 20:11:08.623662    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-399565" podStartSLOduration=1.623535688 podStartE2EDuration="1.623535688s" podCreationTimestamp="2025-12-12 20:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:08.623538511 +0000 UTC m=+1.162155881" watchObservedRunningTime="2025-12-12 20:11:08.623535688 +0000 UTC m=+1.162153079"
	Dec 12 20:11:08 embed-certs-399565 kubelet[1331]: I1212 20:11:08.643083    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-399565" podStartSLOduration=1.643063085 podStartE2EDuration="1.643063085s" podCreationTimestamp="2025-12-12 20:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:08.632979671 +0000 UTC m=+1.171597047" watchObservedRunningTime="2025-12-12 20:11:08.643063085 +0000 UTC m=+1.181680464"
	Dec 12 20:11:11 embed-certs-399565 kubelet[1331]: I1212 20:11:11.984132    1331 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 20:11:11 embed-certs-399565 kubelet[1331]: I1212 20:11:11.984983    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077252    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c2a5685-5864-4af2-a1ef-5f355fd2a95b-xtables-lock\") pod \"kindnet-5fbmr\" (UID: \"6c2a5685-5864-4af2-a1ef-5f355fd2a95b\") " pod="kube-system/kindnet-5fbmr"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077337    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c2a5685-5864-4af2-a1ef-5f355fd2a95b-lib-modules\") pod \"kindnet-5fbmr\" (UID: \"6c2a5685-5864-4af2-a1ef-5f355fd2a95b\") " pod="kube-system/kindnet-5fbmr"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077370    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84v9q\" (UniqueName: \"kubernetes.io/projected/6c2a5685-5864-4af2-a1ef-5f355fd2a95b-kube-api-access-84v9q\") pod \"kindnet-5fbmr\" (UID: \"6c2a5685-5864-4af2-a1ef-5f355fd2a95b\") " pod="kube-system/kindnet-5fbmr"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077392    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82692b91-abfa-4ef0-915d-af7f57048d82-kube-proxy\") pod \"kube-proxy-xgs9b\" (UID: \"82692b91-abfa-4ef0-915d-af7f57048d82\") " pod="kube-system/kube-proxy-xgs9b"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077411    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5vjf\" (UniqueName: \"kubernetes.io/projected/82692b91-abfa-4ef0-915d-af7f57048d82-kube-api-access-d5vjf\") pod \"kube-proxy-xgs9b\" (UID: \"82692b91-abfa-4ef0-915d-af7f57048d82\") " pod="kube-system/kube-proxy-xgs9b"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077439    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82692b91-abfa-4ef0-915d-af7f57048d82-lib-modules\") pod \"kube-proxy-xgs9b\" (UID: \"82692b91-abfa-4ef0-915d-af7f57048d82\") " pod="kube-system/kube-proxy-xgs9b"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077465    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c2a5685-5864-4af2-a1ef-5f355fd2a95b-cni-cfg\") pod \"kindnet-5fbmr\" (UID: \"6c2a5685-5864-4af2-a1ef-5f355fd2a95b\") " pod="kube-system/kindnet-5fbmr"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.077513    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82692b91-abfa-4ef0-915d-af7f57048d82-xtables-lock\") pod \"kube-proxy-xgs9b\" (UID: \"82692b91-abfa-4ef0-915d-af7f57048d82\") " pod="kube-system/kube-proxy-xgs9b"
	Dec 12 20:11:13 embed-certs-399565 kubelet[1331]: I1212 20:11:13.603951    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xgs9b" podStartSLOduration=0.603926621 podStartE2EDuration="603.926621ms" podCreationTimestamp="2025-12-12 20:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:13.603685861 +0000 UTC m=+6.142303248" watchObservedRunningTime="2025-12-12 20:11:13.603926621 +0000 UTC m=+6.142544000"
	Dec 12 20:11:19 embed-certs-399565 kubelet[1331]: I1212 20:11:19.414396    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5fbmr" podStartSLOduration=6.414372027 podStartE2EDuration="6.414372027s" podCreationTimestamp="2025-12-12 20:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:13.613532567 +0000 UTC m=+6.152149946" watchObservedRunningTime="2025-12-12 20:11:19.414372027 +0000 UTC m=+11.952989401"
	Dec 12 20:11:24 embed-certs-399565 kubelet[1331]: I1212 20:11:24.174544    1331 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 20:11:24 embed-certs-399565 kubelet[1331]: I1212 20:11:24.257645    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqkh\" (UniqueName: \"kubernetes.io/projected/970ffc0a-f3a7-4981-a59e-f47762e9d53e-kube-api-access-hqqkh\") pod \"storage-provisioner\" (UID: \"970ffc0a-f3a7-4981-a59e-f47762e9d53e\") " pod="kube-system/storage-provisioner"
	Dec 12 20:11:24 embed-certs-399565 kubelet[1331]: I1212 20:11:24.257683    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b0daa17-68a0-4f3f-b50c-114a8218c542-config-volume\") pod \"coredns-66bc5c9577-zg2v9\" (UID: \"8b0daa17-68a0-4f3f-b50c-114a8218c542\") " pod="kube-system/coredns-66bc5c9577-zg2v9"
	Dec 12 20:11:24 embed-certs-399565 kubelet[1331]: I1212 20:11:24.257703    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/970ffc0a-f3a7-4981-a59e-f47762e9d53e-tmp\") pod \"storage-provisioner\" (UID: \"970ffc0a-f3a7-4981-a59e-f47762e9d53e\") " pod="kube-system/storage-provisioner"
	Dec 12 20:11:24 embed-certs-399565 kubelet[1331]: I1212 20:11:24.257717    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcdq\" (UniqueName: \"kubernetes.io/projected/8b0daa17-68a0-4f3f-b50c-114a8218c542-kube-api-access-kqcdq\") pod \"coredns-66bc5c9577-zg2v9\" (UID: \"8b0daa17-68a0-4f3f-b50c-114a8218c542\") " pod="kube-system/coredns-66bc5c9577-zg2v9"
	Dec 12 20:11:24 embed-certs-399565 kubelet[1331]: I1212 20:11:24.646908    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zg2v9" podStartSLOduration=11.646882342 podStartE2EDuration="11.646882342s" podCreationTimestamp="2025-12-12 20:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:24.64637449 +0000 UTC m=+17.184991870" watchObservedRunningTime="2025-12-12 20:11:24.646882342 +0000 UTC m=+17.185499722"
	Dec 12 20:11:25 embed-certs-399565 kubelet[1331]: I1212 20:11:25.648055    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.648032382 podStartE2EDuration="12.648032382s" podCreationTimestamp="2025-12-12 20:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 20:11:24.659231475 +0000 UTC m=+17.197848854" watchObservedRunningTime="2025-12-12 20:11:25.648032382 +0000 UTC m=+18.186649761"
	Dec 12 20:11:27 embed-certs-399565 kubelet[1331]: I1212 20:11:27.377896    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9v4v\" (UniqueName: \"kubernetes.io/projected/2b73ee4b-c108-4ada-b144-9eb629cde278-kube-api-access-q9v4v\") pod \"busybox\" (UID: \"2b73ee4b-c108-4ada-b144-9eb629cde278\") " pod="default/busybox"
	Dec 12 20:11:28 embed-certs-399565 kubelet[1331]: I1212 20:11:28.654671    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.031778027 podStartE2EDuration="1.654647009s" podCreationTimestamp="2025-12-12 20:11:27 +0000 UTC" firstStartedPulling="2025-12-12 20:11:27.606226815 +0000 UTC m=+20.144844188" lastFinishedPulling="2025-12-12 20:11:28.229095798 +0000 UTC m=+20.767713170" observedRunningTime="2025-12-12 20:11:28.654208281 +0000 UTC m=+21.192825661" watchObservedRunningTime="2025-12-12 20:11:28.654647009 +0000 UTC m=+21.193264388"
	
	
	==> storage-provisioner [af388bac85bd7aa4b3cdf80e6741fbbdfbdeaaa6183ec4ddbdef0c6cb551dd2c] <==
	I1212 20:11:24.595932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:11:24.605646       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:11:24.605715       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:11:24.608413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:24.615099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:11:24.615374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:11:24.616031       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-399565_c8290329-72f6-4983-84e1-7b3d821e9afa!
	I1212 20:11:24.615559       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ba9b917-4c14-4eae-ad77-6eb4b88284eb", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-399565_c8290329-72f6-4983-84e1-7b3d821e9afa became leader
	W1212 20:11:24.620678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:24.631231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:11:24.717137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-399565_c8290329-72f6-4983-84e1-7b3d821e9afa!
	W1212 20:11:26.635045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:26.639449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:28.643064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:28.648837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:30.651417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:30.655720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:32.659539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:32.663703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:34.667527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:34.799807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:36.802871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:11:36.806981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-399565 -n embed-certs-399565
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-399565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-433034 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-433034 --alsologtostderr -v=1: exit status 80 (2.064193582s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-433034 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:12:37.489456  330466 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:37.489555  330466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:37.489565  330466 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:37.489569  330466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:37.489762  330466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:12:37.489971  330466 out.go:368] Setting JSON to false
	I1212 20:12:37.489984  330466 mustload.go:66] Loading cluster: default-k8s-diff-port-433034
	I1212 20:12:37.490383  330466 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:37.490888  330466 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433034 --format={{.State.Status}}
	I1212 20:12:37.519049  330466 host.go:66] Checking if "default-k8s-diff-port-433034" exists ...
	I1212 20:12:37.519492  330466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:37.604494  330466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-12 20:12:37.589156606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:37.605554  330466 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765505725-22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765505725-22112-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-433034 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 20:12:37.607748  330466 out.go:179] * Pausing node default-k8s-diff-port-433034 ... 
	I1212 20:12:37.609306  330466 host.go:66] Checking if "default-k8s-diff-port-433034" exists ...
	I1212 20:12:37.609612  330466 ssh_runner.go:195] Run: systemctl --version
	I1212 20:12:37.609662  330466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433034
	I1212 20:12:37.633264  330466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/default-k8s-diff-port-433034/id_rsa Username:docker}
	I1212 20:12:37.736555  330466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:37.752994  330466 pause.go:52] kubelet running: true
	I1212 20:12:37.753056  330466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:12:37.988717  330466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:12:37.988804  330466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:12:38.073032  330466 cri.go:89] found id: "a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8"
	I1212 20:12:38.073078  330466 cri.go:89] found id: "8219c3982e2a00c14a01654ae80b4054af8d527ae5e2473d70b4f644e062d30c"
	I1212 20:12:38.073085  330466 cri.go:89] found id: "88edfb91cf4a038250228f682d3173e413b779bc18321abc13169b2fa6574901"
	I1212 20:12:38.073089  330466 cri.go:89] found id: "004cb4da4fc4a28cc850784ef818eb4543cdf0dedee9670ac227fada50f58160"
	I1212 20:12:38.073094  330466 cri.go:89] found id: "8fc7fbe67821e88822d5c7655631e923b3b25aec05a2aec07ab906239a66992a"
	I1212 20:12:38.073099  330466 cri.go:89] found id: "ebeb10d45d10d2c655391f363492fcf212271217062b328f88a67404cc971388"
	I1212 20:12:38.073112  330466 cri.go:89] found id: "db085ca1f08ebad1a72de68de42b83fd3c82a1ed0f265e1e74983cd5d88ae7f5"
	I1212 20:12:38.073117  330466 cri.go:89] found id: "6edfed35b96f2b2cbb9c54cdfbf440c89b72c03fc6a8947569d87276098e3d6e"
	I1212 20:12:38.073121  330466 cri.go:89] found id: "261b4a83ad82d0b63e1a0022703c411f8ddd6b03f5cbf86192b1fbce85653f93"
	I1212 20:12:38.073142  330466 cri.go:89] found id: "4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9"
	I1212 20:12:38.073150  330466 cri.go:89] found id: "57f988954b100c48adbf59a94719d00d2d865dfab8b794ee332c80fa4b999f24"
	I1212 20:12:38.073155  330466 cri.go:89] found id: ""
	I1212 20:12:38.073207  330466 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:38.088689  330466 retry.go:31] will retry after 238.186061ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:38Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:12:38.327103  330466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:38.343329  330466 pause.go:52] kubelet running: false
	I1212 20:12:38.343388  330466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:12:38.532363  330466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:12:38.532447  330466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:12:38.624580  330466 cri.go:89] found id: "a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8"
	I1212 20:12:38.624614  330466 cri.go:89] found id: "8219c3982e2a00c14a01654ae80b4054af8d527ae5e2473d70b4f644e062d30c"
	I1212 20:12:38.624620  330466 cri.go:89] found id: "88edfb91cf4a038250228f682d3173e413b779bc18321abc13169b2fa6574901"
	I1212 20:12:38.624626  330466 cri.go:89] found id: "004cb4da4fc4a28cc850784ef818eb4543cdf0dedee9670ac227fada50f58160"
	I1212 20:12:38.624631  330466 cri.go:89] found id: "8fc7fbe67821e88822d5c7655631e923b3b25aec05a2aec07ab906239a66992a"
	I1212 20:12:38.624637  330466 cri.go:89] found id: "ebeb10d45d10d2c655391f363492fcf212271217062b328f88a67404cc971388"
	I1212 20:12:38.624641  330466 cri.go:89] found id: "db085ca1f08ebad1a72de68de42b83fd3c82a1ed0f265e1e74983cd5d88ae7f5"
	I1212 20:12:38.624646  330466 cri.go:89] found id: "6edfed35b96f2b2cbb9c54cdfbf440c89b72c03fc6a8947569d87276098e3d6e"
	I1212 20:12:38.624650  330466 cri.go:89] found id: "261b4a83ad82d0b63e1a0022703c411f8ddd6b03f5cbf86192b1fbce85653f93"
	I1212 20:12:38.624665  330466 cri.go:89] found id: "4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9"
	I1212 20:12:38.624669  330466 cri.go:89] found id: "57f988954b100c48adbf59a94719d00d2d865dfab8b794ee332c80fa4b999f24"
	I1212 20:12:38.624673  330466 cri.go:89] found id: ""
	I1212 20:12:38.624746  330466 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:38.641230  330466 retry.go:31] will retry after 547.520035ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:38Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:12:39.188950  330466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:39.204169  330466 pause.go:52] kubelet running: false
	I1212 20:12:39.204226  330466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:12:39.382834  330466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:12:39.382929  330466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:12:39.455362  330466 cri.go:89] found id: "a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8"
	I1212 20:12:39.455388  330466 cri.go:89] found id: "8219c3982e2a00c14a01654ae80b4054af8d527ae5e2473d70b4f644e062d30c"
	I1212 20:12:39.455394  330466 cri.go:89] found id: "88edfb91cf4a038250228f682d3173e413b779bc18321abc13169b2fa6574901"
	I1212 20:12:39.455399  330466 cri.go:89] found id: "004cb4da4fc4a28cc850784ef818eb4543cdf0dedee9670ac227fada50f58160"
	I1212 20:12:39.455404  330466 cri.go:89] found id: "8fc7fbe67821e88822d5c7655631e923b3b25aec05a2aec07ab906239a66992a"
	I1212 20:12:39.455409  330466 cri.go:89] found id: "ebeb10d45d10d2c655391f363492fcf212271217062b328f88a67404cc971388"
	I1212 20:12:39.455414  330466 cri.go:89] found id: "db085ca1f08ebad1a72de68de42b83fd3c82a1ed0f265e1e74983cd5d88ae7f5"
	I1212 20:12:39.455418  330466 cri.go:89] found id: "6edfed35b96f2b2cbb9c54cdfbf440c89b72c03fc6a8947569d87276098e3d6e"
	I1212 20:12:39.455423  330466 cri.go:89] found id: "261b4a83ad82d0b63e1a0022703c411f8ddd6b03f5cbf86192b1fbce85653f93"
	I1212 20:12:39.455453  330466 cri.go:89] found id: "4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9"
	I1212 20:12:39.455461  330466 cri.go:89] found id: "57f988954b100c48adbf59a94719d00d2d865dfab8b794ee332c80fa4b999f24"
	I1212 20:12:39.455466  330466 cri.go:89] found id: ""
	I1212 20:12:39.455517  330466 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:39.471830  330466 out.go:203] 
	W1212 20:12:39.473200  330466 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:12:39.473220  330466 out.go:285] * 
	* 
	W1212 20:12:39.477266  330466 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:12:39.478378  330466 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-433034 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-433034
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-433034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7",
	        "Created": "2025-12-12T20:10:35.289904623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:11:40.412088111Z",
	            "FinishedAt": "2025-12-12T20:11:39.288294952Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/hosts",
	        "LogPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7-json.log",
	        "Name": "/default-k8s-diff-port-433034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-433034:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-433034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7",
	                "LowerDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-433034",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-433034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-433034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-433034",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-433034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fe8f411539cb5b958f101a44e69299365945f917a48886e77ad3390bdbf3230e",
	            "SandboxKey": "/var/run/docker/netns/fe8f411539cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-433034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9682428112d69f44e5ab9b8a0895f7f7dfc5a7aa9a7423b8acd6944687003e6d",
	                    "EndpointID": "f684e000dbeda3def689071789d225deac1bbbd4d1715137a1149064606143d7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "26:29:1d:43:58:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-433034",
	                        "fd3264bb0f47"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
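For reference, individual fields in an inspect dump like the one above can be extracted with a Go template instead of reading the full JSON. A minimal sketch, assuming the container name and the 8444/tcp mapping shown above; the program is illustrative and not part of the test suite:

	// inspect_port.go - print the host port published for 8444/tcp on a kic container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The template indexes NetworkSettings.Ports["8444/tcp"][0].HostPort,
		// mirroring the Ports map in the inspect output above.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-433034").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("8444/tcp is published on host port", strings.TrimSpace(string(out)))
	}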
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034: exit status 2 (333.366602ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
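The harness tolerates the non-zero exit here because `minikube status` reports degraded components through its exit code while still printing the host state. A minimal sketch of capturing both, assuming the same binary path and profile name as above (the meaning of exit code 2 follows the test's own "may be ok" note, not a documented contract):

	// status_check.go - run `minikube status --format={{.Host}}` and report its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-433034")
		out, err := cmd.Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // e.g. 2: host is Running but some component is not OK
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("host state %q, exit code %d\n", strings.TrimSpace(string(out)), code)
	}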
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433034 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-433034 logs -n 25: (1.300794006s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-789448 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p auto-789448 sudo systemctl cat containerd --no-pager                                                                                │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo cat /etc/containerd/config.toml                                                                                    │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo containerd config dump                                                                                             │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo systemctl cat crio --no-pager                                                                                      │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo crio config                                                                                                        │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p auto-789448                                                                                                                         │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ start   │ -p calico-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-789448                │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 pgrep -a kubelet                                                                                                     │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/nsswitch.conf                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/hosts                                                                                                  │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/resolv.conf                                                                                            │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo crictl pods                                                                                                     │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo crictl ps --all                                                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ image   │ default-k8s-diff-port-433034 image list --format=json                                                                                  │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ pause   │ -p default-k8s-diff-port-433034 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo ip a s                                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo ip r s                                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo iptables-save                                                                                                   │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo iptables -t nat -L -n -v                                                                                        │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl status kubelet --all --full --no-pager                                                                │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:12:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:12:08.632296  325830 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:08.632582  325830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:08.632596  325830 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:08.632603  325830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:08.632824  325830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:12:08.633259  325830 out.go:368] Setting JSON to false
	I1212 20:12:08.634466  325830 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3276,"bootTime":1765567053,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:12:08.634526  325830 start.go:143] virtualization: kvm guest
	I1212 20:12:08.636287  325830 out.go:179] * [calico-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:12:08.637783  325830 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:12:08.637805  325830 notify.go:221] Checking for updates...
	I1212 20:12:08.640527  325830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:12:08.641583  325830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:12:08.642550  325830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:12:08.643531  325830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:12:08.644515  325830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:12:08.646531  325830 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:08.646657  325830 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:08.646775  325830 config.go:182] Loaded profile config "kindnet-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:08.646891  325830 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:12:08.670016  325830 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:12:08.670145  325830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:08.727837  325830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 20:12:08.718115531 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:08.727925  325830 docker.go:319] overlay module found
	I1212 20:12:08.729872  325830 out.go:179] * Using the docker driver based on user configuration
	I1212 20:12:08.730902  325830 start.go:309] selected driver: docker
	I1212 20:12:08.730915  325830 start.go:927] validating driver "docker" against <nil>
	I1212 20:12:08.730925  325830 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:12:08.731443  325830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:08.791953  325830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 20:12:08.781383266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:08.792158  325830 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:12:08.792425  325830 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:08.794964  325830 out.go:179] * Using Docker driver with root privileges
	I1212 20:12:08.796196  325830 cni.go:84] Creating CNI manager for "calico"
	I1212 20:12:08.796219  325830 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1212 20:12:08.796329  325830 start.go:353] cluster config:
	{Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:12:08.797541  325830 out.go:179] * Starting "calico-789448" primary control-plane node in "calico-789448" cluster
	I1212 20:12:08.798589  325830 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:12:08.799683  325830 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:12:08.800742  325830 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:08.800775  325830 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:12:08.800794  325830 cache.go:65] Caching tarball of preloaded images
	I1212 20:12:08.800842  325830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:12:08.800906  325830 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:12:08.800923  325830 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:12:08.801043  325830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/config.json ...
	I1212 20:12:08.801081  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/config.json: {Name:mk8011d9b30e95660856db3433c630354f571ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:08.821298  325830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:12:08.821318  325830 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:12:08.821347  325830 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:12:08.821385  325830 start.go:360] acquireMachinesLock for calico-789448: {Name:mk7f96f34e4f60fdbf53c82f6cb4ee1f554e00e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:12:08.821483  325830 start.go:364] duration metric: took 78.792µs to acquireMachinesLock for "calico-789448"
	I1212 20:12:08.821510  325830 start.go:93] Provisioning new machine with config: &{Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:12:08.821572  325830 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:12:05.105685  319249 addons.go:530] duration metric: took 2.291229446s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 20:12:05.581405  319249 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:12:05.585818  319249 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:12:05.585845  319249 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:12:06.081383  319249 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:12:06.086030  319249 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 20:12:06.087196  319249 api_server.go:141] control plane version: v1.34.2
	I1212 20:12:06.087223  319249 api_server.go:131] duration metric: took 1.006598775s to wait for apiserver health ...
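The two healthz probes above (a 500 while rbac/bootstrap-roles is still pending, then a 200) are plain HTTPS GETs against the apiserver. A minimal sketch of the same poll loop, assuming the endpoint URL taken from the log and skipping certificate verification as a local harness would; the timings are arbitrary:

	// healthz_poll.go - poll an apiserver /healthz endpoint until it returns 200 or times out.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.94.2:8443/healthz" // taken from the log above
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					fmt.Println("healthz returned 200: ok")
					return
				}
				fmt.Println("healthz returned", status, "- retrying")
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}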
	I1212 20:12:06.087234  319249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:12:06.091799  319249 system_pods.go:59] 8 kube-system pods found
	I1212 20:12:06.091840  319249 system_pods.go:61] "coredns-66bc5c9577-zg2v9" [8b0daa17-68a0-4f3f-b50c-114a8218c542] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:06.091853  319249 system_pods.go:61] "etcd-embed-certs-399565" [ba75b498-a50f-48ae-9e09-c928ba04794f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:12:06.091863  319249 system_pods.go:61] "kindnet-5fbmr" [6c2a5685-5864-4af2-a1ef-5f355fd2a95b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:12:06.091873  319249 system_pods.go:61] "kube-apiserver-embed-certs-399565" [8850ea17-2667-403a-af36-83cdefa2548a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:12:06.091881  319249 system_pods.go:61] "kube-controller-manager-embed-certs-399565" [5e04b62d-f4fd-4664-aee8-e9b0a4b015f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:12:06.091896  319249 system_pods.go:61] "kube-proxy-xgs9b" [82692b91-abfa-4ef0-915d-af7f57048d82] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 20:12:06.091904  319249 system_pods.go:61] "kube-scheduler-embed-certs-399565" [3f9b76ad-c6b0-4de4-86ad-2ca8b4fee658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:12:06.091921  319249 system_pods.go:61] "storage-provisioner" [970ffc0a-f3a7-4981-a59e-f47762e9d53e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:06.091931  319249 system_pods.go:74] duration metric: took 4.690837ms to wait for pod list to return data ...
	I1212 20:12:06.091941  319249 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:12:06.094724  319249 default_sa.go:45] found service account: "default"
	I1212 20:12:06.094742  319249 default_sa.go:55] duration metric: took 2.792846ms for default service account to be created ...
	I1212 20:12:06.094750  319249 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:12:06.097534  319249 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:06.097562  319249 system_pods.go:89] "coredns-66bc5c9577-zg2v9" [8b0daa17-68a0-4f3f-b50c-114a8218c542] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:06.097575  319249 system_pods.go:89] "etcd-embed-certs-399565" [ba75b498-a50f-48ae-9e09-c928ba04794f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:12:06.097586  319249 system_pods.go:89] "kindnet-5fbmr" [6c2a5685-5864-4af2-a1ef-5f355fd2a95b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:12:06.097600  319249 system_pods.go:89] "kube-apiserver-embed-certs-399565" [8850ea17-2667-403a-af36-83cdefa2548a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:12:06.097614  319249 system_pods.go:89] "kube-controller-manager-embed-certs-399565" [5e04b62d-f4fd-4664-aee8-e9b0a4b015f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:12:06.097626  319249 system_pods.go:89] "kube-proxy-xgs9b" [82692b91-abfa-4ef0-915d-af7f57048d82] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 20:12:06.097634  319249 system_pods.go:89] "kube-scheduler-embed-certs-399565" [3f9b76ad-c6b0-4de4-86ad-2ca8b4fee658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:12:06.097643  319249 system_pods.go:89] "storage-provisioner" [970ffc0a-f3a7-4981-a59e-f47762e9d53e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:06.097652  319249 system_pods.go:126] duration metric: took 2.895822ms to wait for k8s-apps to be running ...
	I1212 20:12:06.097664  319249 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:12:06.097708  319249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:06.114598  319249 system_svc.go:56] duration metric: took 16.926454ms WaitForService to wait for kubelet
	I1212 20:12:06.114624  319249 kubeadm.go:587] duration metric: took 3.300225537s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:06.114641  319249 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:12:06.117538  319249 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:12:06.117562  319249 node_conditions.go:123] node cpu capacity is 8
	I1212 20:12:06.117586  319249 node_conditions.go:105] duration metric: took 2.939227ms to run NodePressure ...
	I1212 20:12:06.117604  319249 start.go:242] waiting for startup goroutines ...
	I1212 20:12:06.117617  319249 start.go:247] waiting for cluster config update ...
	I1212 20:12:06.117634  319249 start.go:256] writing updated cluster config ...
	I1212 20:12:06.117913  319249 ssh_runner.go:195] Run: rm -f paused
	I1212 20:12:06.122456  319249 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:06.127112  319249 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zg2v9" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 20:12:08.131882  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:05.798810  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:08.295339  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:08.392839  312743 node_ready.go:57] node "kindnet-789448" has "Ready":"False" status (will retry)
	W1212 20:12:10.892637  312743 node_ready.go:57] node "kindnet-789448" has "Ready":"False" status (will retry)
	I1212 20:12:11.391910  312743 node_ready.go:49] node "kindnet-789448" is "Ready"
	I1212 20:12:11.391938  312743 node_ready.go:38] duration metric: took 11.503350111s for node "kindnet-789448" to be "Ready" ...
	I1212 20:12:11.391955  312743 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:12:11.392006  312743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:12:11.409047  312743 api_server.go:72] duration metric: took 11.983405525s to wait for apiserver process to appear ...
	I1212 20:12:11.409075  312743 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:12:11.409096  312743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:12:11.414601  312743 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 20:12:11.416214  312743 api_server.go:141] control plane version: v1.34.2
	I1212 20:12:11.416241  312743 api_server.go:131] duration metric: took 7.158358ms to wait for apiserver health ...
	I1212 20:12:11.416252  312743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:12:11.420435  312743 system_pods.go:59] 8 kube-system pods found
	I1212 20:12:11.420478  312743 system_pods.go:61] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:11.420491  312743 system_pods.go:61] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:11.420503  312743 system_pods.go:61] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:11.420509  312743 system_pods.go:61] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:11.420515  312743 system_pods.go:61] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:11.420521  312743 system_pods.go:61] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:11.420526  312743 system_pods.go:61] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:11.420533  312743 system_pods.go:61] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:11.420541  312743 system_pods.go:74] duration metric: took 4.282454ms to wait for pod list to return data ...
	I1212 20:12:11.420549  312743 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:12:11.422986  312743 default_sa.go:45] found service account: "default"
	I1212 20:12:11.423005  312743 default_sa.go:55] duration metric: took 2.44913ms for default service account to be created ...
	I1212 20:12:11.423014  312743 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:12:08.823301  325830 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:12:08.823503  325830 start.go:159] libmachine.API.Create for "calico-789448" (driver="docker")
	I1212 20:12:08.823530  325830 client.go:173] LocalClient.Create starting
	I1212 20:12:08.823590  325830 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:12:08.823619  325830 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:08.823634  325830 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:08.823689  325830 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:12:08.823707  325830 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:08.823719  325830 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:08.824046  325830 cli_runner.go:164] Run: docker network inspect calico-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:12:08.841250  325830 cli_runner.go:211] docker network inspect calico-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:12:08.841353  325830 network_create.go:284] running [docker network inspect calico-789448] to gather additional debugging logs...
	I1212 20:12:08.841375  325830 cli_runner.go:164] Run: docker network inspect calico-789448
	W1212 20:12:08.858459  325830 cli_runner.go:211] docker network inspect calico-789448 returned with exit code 1
	I1212 20:12:08.858489  325830 network_create.go:287] error running [docker network inspect calico-789448]: docker network inspect calico-789448: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-789448 not found
	I1212 20:12:08.858505  325830 network_create.go:289] output of [docker network inspect calico-789448]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-789448 not found
	
	** /stderr **
	I1212 20:12:08.858636  325830 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:12:08.877531  325830 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:12:08.878458  325830 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:12:08.879486  325830 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:12:08.880236  325830 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c165baeec493 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:78:86:50:3b:d1} reservation:<nil>}
	I1212 20:12:08.881342  325830 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e83c80}
	I1212 20:12:08.881365  325830 network_create.go:124] attempt to create docker network calico-789448 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 20:12:08.881434  325830 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-789448 calico-789448
	I1212 20:12:08.930625  325830 network_create.go:108] docker network calico-789448 192.168.85.0/24 created
	I1212 20:12:08.930658  325830 kic.go:121] calculated static IP "192.168.85.2" for the "calico-789448" container
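The subnet scan above steps through candidate 192.168.x.0/24 ranges and skips any whose gateway is already bound to a local bridge. A rough sketch of that idea, assuming the same 9-wide stride seen in the log and using only local interface addresses as the "taken" signal (a simplification of the full check minikube performs):

	// free_subnet.go - pick the first private /24 whose gateway is not already assigned locally.
	package main

	import (
		"fmt"
		"net"
	)

	func gatewayInUse(gw string) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // treat an unknown state as taken
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
				return true
			}
		}
		return false
	}

	func main() {
		for third := 49; third <= 250; third += 9 { // same stride as the log: .49, .58, .67, ...
			gw := fmt.Sprintf("192.168.%d.1", third)
			if gatewayInUse(gw) {
				fmt.Printf("skipping 192.168.%d.0/24: gateway %s already assigned\n", third, gw)
				continue
			}
			fmt.Printf("using free subnet 192.168.%d.0/24 with gateway %s\n", third, gw)
			return
		}
	}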
	I1212 20:12:08.930724  325830 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:12:08.953127  325830 cli_runner.go:164] Run: docker volume create calico-789448 --label name.minikube.sigs.k8s.io=calico-789448 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:12:08.978653  325830 oci.go:103] Successfully created a docker volume calico-789448
	I1212 20:12:08.978793  325830 cli_runner.go:164] Run: docker run --rm --name calico-789448-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-789448 --entrypoint /usr/bin/test -v calico-789448:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:12:09.518414  325830 oci.go:107] Successfully prepared a docker volume calico-789448
	I1212 20:12:09.518497  325830 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:09.518510  325830 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:12:09.518582  325830 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:12:11.520781  312743 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:11.520824  312743 system_pods.go:89] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:11.520833  312743 system_pods.go:89] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:11.520841  312743 system_pods.go:89] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:11.520854  312743 system_pods.go:89] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:11.520860  312743 system_pods.go:89] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:11.520866  312743 system_pods.go:89] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:11.520871  312743 system_pods.go:89] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:11.520880  312743 system_pods.go:89] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:11.520914  312743 retry.go:31] will retry after 191.610621ms: missing components: kube-dns
	I1212 20:12:11.720885  312743 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:11.720923  312743 system_pods.go:89] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:11.720931  312743 system_pods.go:89] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:11.720940  312743 system_pods.go:89] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:11.720945  312743 system_pods.go:89] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:11.720951  312743 system_pods.go:89] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:11.720956  312743 system_pods.go:89] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:11.720962  312743 system_pods.go:89] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:11.720975  312743 system_pods.go:89] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Running
	I1212 20:12:11.720998  312743 retry.go:31] will retry after 294.52645ms: missing components: kube-dns
	I1212 20:12:12.020125  312743 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:12.020153  312743 system_pods.go:89] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Running
	I1212 20:12:12.020159  312743 system_pods.go:89] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:12.020163  312743 system_pods.go:89] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:12.020167  312743 system_pods.go:89] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:12.020170  312743 system_pods.go:89] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:12.020174  312743 system_pods.go:89] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:12.020180  312743 system_pods.go:89] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:12.020184  312743 system_pods.go:89] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Running
	I1212 20:12:12.020193  312743 system_pods.go:126] duration metric: took 597.173096ms to wait for k8s-apps to be running ...
	I1212 20:12:12.020206  312743 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:12:12.020252  312743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:12.034755  312743 system_svc.go:56] duration metric: took 14.539491ms WaitForService to wait for kubelet
	I1212 20:12:12.034781  312743 kubeadm.go:587] duration metric: took 12.609143945s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:12.034811  312743 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:12:12.037821  312743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:12:12.037849  312743 node_conditions.go:123] node cpu capacity is 8
	I1212 20:12:12.037876  312743 node_conditions.go:105] duration metric: took 3.054573ms to run NodePressure ...
	I1212 20:12:12.037900  312743 start.go:242] waiting for startup goroutines ...
	I1212 20:12:12.037914  312743 start.go:247] waiting for cluster config update ...
	I1212 20:12:12.037931  312743 start.go:256] writing updated cluster config ...
	I1212 20:12:12.067389  312743 ssh_runner.go:195] Run: rm -f paused
	I1212 20:12:12.072388  312743 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:12.076424  312743 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6jhfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.080756  312743 pod_ready.go:94] pod "coredns-66bc5c9577-6jhfx" is "Ready"
	I1212 20:12:12.080777  312743 pod_ready.go:86] duration metric: took 4.333416ms for pod "coredns-66bc5c9577-6jhfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.082871  312743 pod_ready.go:83] waiting for pod "etcd-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.086624  312743 pod_ready.go:94] pod "etcd-kindnet-789448" is "Ready"
	I1212 20:12:12.086641  312743 pod_ready.go:86] duration metric: took 3.750837ms for pod "etcd-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.088472  312743 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.093346  312743 pod_ready.go:94] pod "kube-apiserver-kindnet-789448" is "Ready"
	I1212 20:12:12.093367  312743 pod_ready.go:86] duration metric: took 4.877638ms for pod "kube-apiserver-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.095405  312743 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.477382  312743 pod_ready.go:94] pod "kube-controller-manager-kindnet-789448" is "Ready"
	I1212 20:12:12.477412  312743 pod_ready.go:86] duration metric: took 381.985061ms for pod "kube-controller-manager-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.676940  312743 pod_ready.go:83] waiting for pod "kube-proxy-fq86t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.076687  312743 pod_ready.go:94] pod "kube-proxy-fq86t" is "Ready"
	I1212 20:12:13.076709  312743 pod_ready.go:86] duration metric: took 399.749827ms for pod "kube-proxy-fq86t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.276544  312743 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.699369  312743 pod_ready.go:94] pod "kube-scheduler-kindnet-789448" is "Ready"
	I1212 20:12:13.699397  312743 pod_ready.go:86] duration metric: took 422.829219ms for pod "kube-scheduler-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.699412  312743 pod_ready.go:40] duration metric: took 1.626996152s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:13.756607  312743 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:12:13.814369  312743 out.go:179] * Done! kubectl is now configured to use "kindnet-789448" cluster and "default" namespace by default
	W1212 20:12:10.134796  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:12.162083  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:10.296202  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:12.796139  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:14.803072  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	I1212 20:12:14.060749  325830 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.54212943s)
	I1212 20:12:14.060792  325830 kic.go:203] duration metric: took 4.542279248s to extract preloaded images to volume ...
	W1212 20:12:14.060889  325830 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:12:14.060924  325830 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:12:14.060962  325830 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:12:14.130383  325830 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-789448 --name calico-789448 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-789448 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-789448 --network calico-789448 --ip 192.168.85.2 --volume calico-789448:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:12:14.623459  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Running}}
	I1212 20:12:14.651212  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:14.679648  325830 cli_runner.go:164] Run: docker exec calico-789448 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:12:14.741833  325830 oci.go:144] the created container "calico-789448" has a running status.
	I1212 20:12:14.741869  325830 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa...
	I1212 20:12:14.834456  325830 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:12:14.866879  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:14.892440  325830 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:12:14.892462  325830 kic_runner.go:114] Args: [docker exec --privileged calico-789448 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:12:14.945230  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:14.967565  325830 machine.go:94] provisionDockerMachine start ...
	I1212 20:12:14.967726  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:14.990216  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:14.990571  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:14.990589  325830 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:12:14.991293  325830 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33224->127.0.0.1:33119: read: connection reset by peer
	I1212 20:12:18.122742  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-789448
	
	I1212 20:12:18.122772  325830 ubuntu.go:182] provisioning hostname "calico-789448"
	I1212 20:12:18.122832  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.142923  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:18.143123  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:18.143140  325830 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-789448 && echo "calico-789448" | sudo tee /etc/hostname
	I1212 20:12:18.282917  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-789448
	
	I1212 20:12:18.282988  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.301179  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:18.301425  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:18.301449  325830 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-789448' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-789448/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-789448' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:12:18.429256  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:12:18.429297  325830 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:12:18.429323  325830 ubuntu.go:190] setting up certificates
	I1212 20:12:18.429334  325830 provision.go:84] configureAuth start
	I1212 20:12:18.429381  325830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-789448
	I1212 20:12:18.447469  325830 provision.go:143] copyHostCerts
	I1212 20:12:18.447542  325830 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:12:18.447556  325830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:12:18.447640  325830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:12:18.448434  325830 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:12:18.448449  325830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:12:18.448501  325830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:12:18.448590  325830 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:12:18.448598  325830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:12:18.448623  325830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:12:18.448678  325830 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.calico-789448 san=[127.0.0.1 192.168.85.2 calico-789448 localhost minikube]
	I1212 20:12:18.599755  325830 provision.go:177] copyRemoteCerts
	I1212 20:12:18.599816  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:12:18.599850  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.617796  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	W1212 20:12:14.634841  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:17.132221  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:19.132369  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:17.295027  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:19.295580  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	I1212 20:12:18.712789  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:12:18.730824  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:12:18.747634  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:12:18.764243  325830 provision.go:87] duration metric: took 334.888323ms to configureAuth
	I1212 20:12:18.764298  325830 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:12:18.764454  325830 config.go:182] Loaded profile config "calico-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:18.764547  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.782656  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:18.782866  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:18.782882  325830 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:12:19.053559  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:12:19.053579  325830 machine.go:97] duration metric: took 4.085992673s to provisionDockerMachine
	I1212 20:12:19.053591  325830 client.go:176] duration metric: took 10.23005055s to LocalClient.Create
	I1212 20:12:19.053607  325830 start.go:167] duration metric: took 10.230104576s to libmachine.API.Create "calico-789448"
	I1212 20:12:19.053617  325830 start.go:293] postStartSetup for "calico-789448" (driver="docker")
	I1212 20:12:19.053634  325830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:12:19.053688  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:12:19.053722  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.072089  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.168787  325830 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:12:19.172107  325830 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:12:19.172132  325830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:12:19.172141  325830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:12:19.172191  325830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:12:19.172297  325830 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:12:19.172420  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:12:19.180070  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:12:19.199078  325830 start.go:296] duration metric: took 145.442426ms for postStartSetup
	I1212 20:12:19.199401  325830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-789448
	I1212 20:12:19.216704  325830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/config.json ...
	I1212 20:12:19.216996  325830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:12:19.217066  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.233853  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.324856  325830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:12:19.329215  325830 start.go:128] duration metric: took 10.507632149s to createHost
	I1212 20:12:19.329243  325830 start.go:83] releasing machines lock for "calico-789448", held for 10.507746865s
	I1212 20:12:19.329340  325830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-789448
	I1212 20:12:19.346424  325830 ssh_runner.go:195] Run: cat /version.json
	I1212 20:12:19.346468  325830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:12:19.346544  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.346474  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.364944  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.365325  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.511680  325830 ssh_runner.go:195] Run: systemctl --version
	I1212 20:12:19.518008  325830 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:12:19.552443  325830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:12:19.556849  325830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:12:19.556921  325830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:12:19.581714  325830 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:12:19.581731  325830 start.go:496] detecting cgroup driver to use...
	I1212 20:12:19.581759  325830 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:12:19.581815  325830 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:12:19.597037  325830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:12:19.608550  325830 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:12:19.608593  325830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:12:19.625473  325830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:12:19.642593  325830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:12:19.722717  325830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:12:19.807953  325830 docker.go:234] disabling docker service ...
	I1212 20:12:19.808014  325830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:12:19.825200  325830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:12:19.837298  325830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:12:19.927989  325830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:12:20.030453  325830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:12:20.042651  325830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:12:20.057112  325830 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:12:20.057160  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.067377  325830 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:12:20.067431  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.075889  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.084354  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.092419  325830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:12:20.100181  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.108312  325830 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.121366  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.130194  325830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:12:20.138310  325830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:12:20.145624  325830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:12:20.236400  325830 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:12:20.389308  325830 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:12:20.389386  325830 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:12:20.393255  325830 start.go:564] Will wait 60s for crictl version
	I1212 20:12:20.393322  325830 ssh_runner.go:195] Run: which crictl
	I1212 20:12:20.396868  325830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:12:20.419924  325830 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:12:20.419993  325830 ssh_runner.go:195] Run: crio --version
	I1212 20:12:20.447743  325830 ssh_runner.go:195] Run: crio --version
	I1212 20:12:20.476900  325830 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:12:20.478071  325830 cli_runner.go:164] Run: docker network inspect calico-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:12:20.495558  325830 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 20:12:20.499490  325830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:12:20.509329  325830 kubeadm.go:884] updating cluster {Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:12:20.509435  325830 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:20.509477  325830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:12:20.539070  325830 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:12:20.539089  325830 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:12:20.539137  325830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:12:20.562199  325830 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:12:20.562217  325830 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:12:20.562226  325830 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 20:12:20.562328  325830 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-789448 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1212 20:12:20.562399  325830 ssh_runner.go:195] Run: crio config
	I1212 20:12:20.613663  325830 cni.go:84] Creating CNI manager for "calico"
	I1212 20:12:20.613689  325830 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:12:20.613709  325830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-789448 NodeName:calico-789448 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:12:20.613828  325830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-789448"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:12:20.613887  325830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:12:20.621850  325830 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:12:20.621920  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:12:20.629793  325830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:12:20.642971  325830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:12:20.658515  325830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1212 20:12:20.674228  325830 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:12:20.679039  325830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:12:20.690319  325830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:12:20.780637  325830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:12:20.808472  325830 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448 for IP: 192.168.85.2
	I1212 20:12:20.808494  325830 certs.go:195] generating shared ca certs ...
	I1212 20:12:20.808511  325830 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:20.808662  325830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:12:20.808741  325830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:12:20.808759  325830 certs.go:257] generating profile certs ...
	I1212 20:12:20.808839  325830 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.key
	I1212 20:12:20.808862  325830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.crt with IP's: []
	I1212 20:12:20.925482  325830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.crt ...
	I1212 20:12:20.925513  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.crt: {Name:mke3f8a05c08a0f013522af18e439d9dfe68b020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:20.925698  325830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.key ...
	I1212 20:12:20.925711  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.key: {Name:mk5b610dd2f6c24e4ffb591fa488fde141d4308f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:20.925819  325830 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415
	I1212 20:12:20.925837  325830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 20:12:21.208673  325830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415 ...
	I1212 20:12:21.208704  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415: {Name:mk625f14f45e18fa35cc335dd91b9ee90b1d0dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.208908  325830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415 ...
	I1212 20:12:21.208922  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415: {Name:mka3a1c2b57c52bece9d0390d3ab736647a50949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.209025  325830 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt
	I1212 20:12:21.209137  325830 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key
	I1212 20:12:21.209220  325830 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key
	I1212 20:12:21.209237  325830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt with IP's: []
	I1212 20:12:21.341713  325830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt ...
	I1212 20:12:21.341752  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt: {Name:mk3638bcc071dc36fe5442929b6d1b03a9020518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.341947  325830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key ...
	I1212 20:12:21.341966  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key: {Name:mkb62fd37a3fba0095570e8d5b1f7238ef95bc6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.342262  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:12:21.342340  325830 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:12:21.342357  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:12:21.342397  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:12:21.342437  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:12:21.342474  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:12:21.342541  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:12:21.343500  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:12:21.364804  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:12:21.382538  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:12:21.400652  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:12:21.423060  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 20:12:21.443361  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:12:21.462352  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:12:21.482470  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:12:21.503516  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:12:21.527142  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:12:21.549942  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:12:21.570764  325830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:12:21.586510  325830 ssh_runner.go:195] Run: openssl version
	I1212 20:12:21.593628  325830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.602577  325830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:12:21.611379  325830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.615896  325830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.615941  325830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.669150  325830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:12:21.679564  325830 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:12:21.688090  325830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.695982  325830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:12:21.703499  325830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.707644  325830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.707694  325830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.750453  325830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:12:21.760915  325830 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
	I1212 20:12:21.769555  325830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.777606  325830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:12:21.785204  325830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.789301  325830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.789353  325830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.841982  325830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:12:21.851919  325830 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:12:21.861800  325830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:12:21.866040  325830 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:12:21.866093  325830 kubeadm.go:401] StartCluster: {Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:12:21.866189  325830 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:12:21.866247  325830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:12:21.899656  325830 cri.go:89] found id: ""
	I1212 20:12:21.899721  325830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:12:21.909813  325830 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:12:21.918531  325830 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:12:21.918579  325830 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:12:21.928050  325830 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:12:21.928067  325830 kubeadm.go:158] found existing configuration files:
	
	I1212 20:12:21.928110  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:12:21.938015  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:12:21.938075  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:12:21.946673  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:12:21.954763  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:12:21.954813  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:12:21.963567  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:12:21.973126  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:12:21.973173  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:12:21.982328  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:12:21.991630  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:12:21.991677  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:12:21.999921  325830 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:12:22.039340  325830 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:12:22.039441  325830 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:12:22.060783  325830 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:12:22.060889  325830 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:12:22.060948  325830 kubeadm.go:319] OS: Linux
	I1212 20:12:22.061024  325830 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:12:22.061118  325830 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:12:22.061195  325830 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:12:22.061269  325830 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:12:22.061370  325830 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:12:22.061439  325830 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:12:22.061510  325830 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:12:22.061571  325830 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:12:22.127512  325830 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:12:22.127673  325830 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:12:22.127797  325830 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:12:22.137709  325830 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:12:22.139919  325830 out.go:252]   - Generating certificates and keys ...
	I1212 20:12:22.140020  325830 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:12:22.140130  325830 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:12:22.221827  325830 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:12:22.348357  325830 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:12:22.378226  325830 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:12:22.659720  325830 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:12:22.853235  325830 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:12:22.853421  325830 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:12:23.109106  325830 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:12:23.109334  325830 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:12:23.258493  325830 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:12:23.493245  325830 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:12:23.543423  325830 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:12:23.543583  325830 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1212 20:12:21.134248  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:23.632623  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:23.657558  325830 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:12:24.265466  325830 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:12:24.468471  325830 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:12:24.717803  325830 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:12:24.858348  325830 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:12:24.858883  325830 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:12:24.862293  325830 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1212 20:12:21.307719  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:23.795776  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	I1212 20:12:24.295462  315481 pod_ready.go:94] pod "coredns-66bc5c9577-8wnb6" is "Ready"
	I1212 20:12:24.295486  315481 pod_ready.go:86] duration metric: took 34.005133721s for pod "coredns-66bc5c9577-8wnb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.297616  315481 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.301292  315481 pod_ready.go:94] pod "etcd-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:24.301311  315481 pod_ready.go:86] duration metric: took 3.673476ms for pod "etcd-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.303086  315481 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.306436  315481 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:24.306453  315481 pod_ready.go:86] duration metric: took 3.349778ms for pod "kube-apiserver-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.308309  315481 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.494945  315481 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:24.494969  315481 pod_ready.go:86] duration metric: took 186.639564ms for pod "kube-controller-manager-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.694822  315481 pod_ready.go:83] waiting for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.095498  315481 pod_ready.go:94] pod "kube-proxy-tmrrg" is "Ready"
	I1212 20:12:25.095526  315481 pod_ready.go:86] duration metric: took 400.673048ms for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.294095  315481 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.693984  315481 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:25.694009  315481 pod_ready.go:86] duration metric: took 399.891348ms for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.694020  315481 pod_ready.go:40] duration metric: took 35.407272239s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:25.739661  315481 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:12:25.740989  315481 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433034" cluster and "default" namespace by default
	I1212 20:12:24.863533  325830 out.go:252]   - Booting up control plane ...
	I1212 20:12:24.863624  325830 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:12:24.863733  325830 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:12:24.864337  325830 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:12:24.877901  325830 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:12:24.878053  325830 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:12:24.884143  325830 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:12:24.884439  325830 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:12:24.884499  325830 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:12:24.981216  325830 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:12:24.981388  325830 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:12:25.482746  325830 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.574022ms
	I1212 20:12:25.485771  325830 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:12:25.485912  325830 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:12:25.486044  325830 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:12:25.486123  325830 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:12:26.990739  325830 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504837982s
	I1212 20:12:27.721889  325830 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.236017201s
	W1212 20:12:26.133069  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:28.631833  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:29.487737  325830 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001900496s
	I1212 20:12:29.506030  325830 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:12:29.517944  325830 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:12:29.527715  325830 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:12:29.528023  325830 kubeadm.go:319] [mark-control-plane] Marking the node calico-789448 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:12:29.535718  325830 kubeadm.go:319] [bootstrap-token] Using token: kxqoft.qy2o8c8ntm56u2md
	I1212 20:12:29.536990  325830 out.go:252]   - Configuring RBAC rules ...
	I1212 20:12:29.537149  325830 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:12:29.540418  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:12:29.545302  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:12:29.547972  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:12:29.551343  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:12:29.553931  325830 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:12:29.893545  325830 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:12:30.307381  325830 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:12:30.893461  325830 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:12:30.894248  325830 kubeadm.go:319] 
	I1212 20:12:30.894359  325830 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:12:30.894381  325830 kubeadm.go:319] 
	I1212 20:12:30.894447  325830 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:12:30.894454  325830 kubeadm.go:319] 
	I1212 20:12:30.894492  325830 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:12:30.894547  325830 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:12:30.894629  325830 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:12:30.894641  325830 kubeadm.go:319] 
	I1212 20:12:30.894722  325830 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:12:30.894733  325830 kubeadm.go:319] 
	I1212 20:12:30.894802  325830 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:12:30.894819  325830 kubeadm.go:319] 
	I1212 20:12:30.894901  325830 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:12:30.894999  325830 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:12:30.895093  325830 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:12:30.895102  325830 kubeadm.go:319] 
	I1212 20:12:30.895215  325830 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:12:30.895341  325830 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:12:30.895349  325830 kubeadm.go:319] 
	I1212 20:12:30.895462  325830 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kxqoft.qy2o8c8ntm56u2md \
	I1212 20:12:30.895611  325830 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:12:30.895657  325830 kubeadm.go:319] 	--control-plane 
	I1212 20:12:30.895666  325830 kubeadm.go:319] 
	I1212 20:12:30.895770  325830 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:12:30.895780  325830 kubeadm.go:319] 
	I1212 20:12:30.895904  325830 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kxqoft.qy2o8c8ntm56u2md \
	I1212 20:12:30.896045  325830 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
	I1212 20:12:30.898247  325830 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:12:30.898382  325830 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:12:30.898412  325830 cni.go:84] Creating CNI manager for "calico"
	I1212 20:12:30.899946  325830 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1212 20:12:30.901356  325830 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:12:30.901374  325830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1212 20:12:30.915011  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:12:31.622556  325830 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:12:31.622627  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:31.622657  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-789448 minikube.k8s.io/updated_at=2025_12_12T20_12_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=calico-789448 minikube.k8s.io/primary=true
	I1212 20:12:31.635367  325830 ops.go:34] apiserver oom_adj: -16
	I1212 20:12:31.702030  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:32.203031  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:32.702568  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:33.202976  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1212 20:12:30.632320  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:33.133505  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:33.702519  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:34.203097  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:34.702454  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:35.202451  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:35.702744  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:35.773046  325830 kubeadm.go:1114] duration metric: took 4.150480167s to wait for elevateKubeSystemPrivileges
	I1212 20:12:35.773086  325830 kubeadm.go:403] duration metric: took 13.906995729s to StartCluster
	I1212 20:12:35.773108  325830 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:35.773184  325830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:12:35.775200  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:35.775498  325830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:12:35.775521  325830 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:12:35.775766  325830 config.go:182] Loaded profile config "calico-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:35.775823  325830 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:12:35.775905  325830 addons.go:70] Setting storage-provisioner=true in profile "calico-789448"
	I1212 20:12:35.775917  325830 addons.go:70] Setting default-storageclass=true in profile "calico-789448"
	I1212 20:12:35.775924  325830 addons.go:239] Setting addon storage-provisioner=true in "calico-789448"
	I1212 20:12:35.775931  325830 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-789448"
	I1212 20:12:35.775953  325830 host.go:66] Checking if "calico-789448" exists ...
	I1212 20:12:35.776388  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:35.776538  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:35.777057  325830 out.go:179] * Verifying Kubernetes components...
	I1212 20:12:35.782432  325830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:12:35.803309  325830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:12:35.804771  325830 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:12:35.804789  325830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:12:35.804862  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:35.805138  325830 addons.go:239] Setting addon default-storageclass=true in "calico-789448"
	I1212 20:12:35.805181  325830 host.go:66] Checking if "calico-789448" exists ...
	I1212 20:12:35.805651  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:35.844550  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:35.845887  325830 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:12:35.845996  325830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:12:35.846048  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:35.873390  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:35.898903  325830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:12:35.943869  325830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:12:35.971597  325830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:12:35.997202  325830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:12:36.108223  325830 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1212 20:12:36.109213  325830 node_ready.go:35] waiting up to 15m0s for node "calico-789448" to be "Ready" ...
	I1212 20:12:36.333997  325830 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:12:36.335261  325830 addons.go:530] duration metric: took 559.430472ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:12:36.612555  325830 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-789448" context rescaled to 1 replicas
	W1212 20:12:38.112969  325830 node_ready.go:57] node "calico-789448" has "Ready":"False" status (will retry)
	W1212 20:12:35.632416  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:37.634465  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:38.633317  319249 pod_ready.go:94] pod "coredns-66bc5c9577-zg2v9" is "Ready"
	I1212 20:12:38.633342  319249 pod_ready.go:86] duration metric: took 32.506205692s for pod "coredns-66bc5c9577-zg2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.635811  319249 pod_ready.go:83] waiting for pod "etcd-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.640519  319249 pod_ready.go:94] pod "etcd-embed-certs-399565" is "Ready"
	I1212 20:12:38.640545  319249 pod_ready.go:86] duration metric: took 4.711813ms for pod "etcd-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.642977  319249 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.647609  319249 pod_ready.go:94] pod "kube-apiserver-embed-certs-399565" is "Ready"
	I1212 20:12:38.647631  319249 pod_ready.go:86] duration metric: took 4.629982ms for pod "kube-apiserver-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.650136  319249 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.831373  319249 pod_ready.go:94] pod "kube-controller-manager-embed-certs-399565" is "Ready"
	I1212 20:12:38.831399  319249 pod_ready.go:86] duration metric: took 181.208829ms for pod "kube-controller-manager-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:39.030500  319249 pod_ready.go:83] waiting for pod "kube-proxy-xgs9b" in "kube-system" namespace to be "Ready" or be gone ...
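
	The pod_ready.go waits interleaved above poll each kube-system pod until its "Ready" condition is true (or the pod is gone). A rough command-line equivalent for the default-k8s-diff-port-433034 profile, illustrative only and assuming minikube created a kubeconfig context with the same name as the profile:

	# wait (up to 4 minutes) for CoreDNS to report Ready, mirroring the pod_ready.go loop
	kubectl --context default-k8s-diff-port-433034 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	# same check for the static etcd pod
	kubectl --context default-k8s-diff-port-433034 -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=4m

	Note that kubectl wait has no built-in "Ready or gone" branch, so this only approximates the helper's behaviour.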
	
	
	==> CRI-O <==
	Dec 12 20:12:13 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:13.814335103Z" level=info msg="Started container" PID=1755 containerID=96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper id=cea523b1-b6ab-4e89-8635-f9bf39edab52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f80555dcff6d92d82fb765165756d067a8b44ae630f1ab0886e11f1e7fd87d83
	Dec 12 20:12:14 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:14.65035882Z" level=info msg="Removing container: c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337" id=2ac522be-6dbb-45cf-b141-4d537123612e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:14 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:14.667411747Z" level=info msg="Removed container c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=2ac522be-6dbb-45cf-b141-4d537123612e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.663585528Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d0172bbc-ca1b-488b-aa57-f705dcd9c5a9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.664634777Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9aa32e31-bc05-4de4-8ff1-ba86218288d9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.665726864Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=14f30f74-c081-4c02-af21-77d396ab2f3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.665872311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.670853478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.671058581Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/222ef159560d1fe09db3ac211a07926487a231fbd5132be4ad5d2ce4586675c4/merged/etc/passwd: no such file or directory"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.671097949Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/222ef159560d1fe09db3ac211a07926487a231fbd5132be4ad5d2ce4586675c4/merged/etc/group: no such file or directory"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.671472864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.698547714Z" level=info msg="Created container a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8: kube-system/storage-provisioner/storage-provisioner" id=14f30f74-c081-4c02-af21-77d396ab2f3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.699081315Z" level=info msg="Starting container: a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8" id=42981bcd-47e3-43cc-a727-32bb2d677f29 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.700982154Z" level=info msg="Started container" PID=1771 containerID=a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8 description=kube-system/storage-provisioner/storage-provisioner id=42981bcd-47e3-43cc-a727-32bb2d677f29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a3f642a50e5643f52b21f16623af2aacd868abf8b1f927b3c6c898510219dd3
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.53833873Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=94957723-489f-4622-9399-498e8adc7dbd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.539076636Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6290cb5b-8661-4070-b20c-cbc0d65aac62 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.540057747Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=297a13b2-49c6-48aa-9898-3fce1c0efe07 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.540321236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.546695188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.547186197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.570300776Z" level=info msg="Created container 4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=297a13b2-49c6-48aa-9898-3fce1c0efe07 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.570993585Z" level=info msg="Starting container: 4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9" id=9e71737d-48bc-428e-8bd3-63aa70c89f2a name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.573171486Z" level=info msg="Started container" PID=1804 containerID=4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper id=9e71737d-48bc-428e-8bd3-63aa70c89f2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f80555dcff6d92d82fb765165756d067a8b44ae630f1ab0886e11f1e7fd87d83
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.708920094Z" level=info msg="Removing container: 96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23" id=2568bafb-9d8b-499c-a712-36e518fea7f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.721962892Z" level=info msg="Removed container 96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=2568bafb-9d8b-499c-a712-36e518fea7f9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	4da0adad794ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   f80555dcff6d9       dashboard-metrics-scraper-6ffb444bf9-bjqrc             kubernetes-dashboard
	a7a62a905d3ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   7a3f642a50e56       storage-provisioner                                    kube-system
	57f988954b100       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   968c430996154       kubernetes-dashboard-855c9754f9-nc8xd                  kubernetes-dashboard
	8219c3982e2a0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   4bdf4741e7ccb       coredns-66bc5c9577-8wnb6                               kube-system
	88edfb91cf4a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   7a3f642a50e56       storage-provisioner                                    kube-system
	07e847a34e485       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   06fa0538bbaf8       busybox                                                default
	004cb4da4fc4a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           50 seconds ago      Running             kube-proxy                  0                   3e75fb5970139       kube-proxy-tmrrg                                       kube-system
	8fc7fbe67821e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   42610116f316c       kindnet-w6vcl                                          kube-system
	ebeb10d45d10d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           53 seconds ago      Running             kube-scheduler              0                   52d3944b9e66e       kube-scheduler-default-k8s-diff-port-433034            kube-system
	db085ca1f08eb       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           53 seconds ago      Running             kube-controller-manager     0                   d602cdf96a132       kube-controller-manager-default-k8s-diff-port-433034   kube-system
	6edfed35b96f2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   2ab6116eb294f       etcd-default-k8s-diff-port-433034                      kube-system
	261b4a83ad82d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           53 seconds ago      Running             kube-apiserver              0                   9f98da21fa6f4       kube-apiserver-default-k8s-diff-port-433034            kube-system
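
	The table above is the CRI-level view from CRI-O (note the Exited dashboard-metrics-scraper at ATTEMPT 3, matching the create/remove cycle in the CRI-O log above it). A similar listing can be pulled directly from the node; a sketch, assuming the profile name from this log and CRI-O's default socket:

	# list all containers (running and exited) known to CRI-O on the node
	minikube -p default-k8s-diff-port-433034 ssh "sudo crictl ps -a"
	# list the pod sandboxes as well
	minikube -p default-k8s-diff-port-433034 ssh "sudo crictl pods"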
	
	
	==> coredns [8219c3982e2a00c14a01654ae80b4054af8d527ae5e2473d70b4f644e062d30c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38069 - 27166 "HINFO IN 8740276210099719302.8441911480510563431. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07583546s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
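
	The "dial tcp 10.96.0.1:443: i/o timeout" errors above are CoreDNS failing to reach the apiserver Service IP shortly after the restart; in this run the pod still reaches "Ready" at 20:12:24, so they appear transient. To inspect the Corefile minikube rewrites at start (including the injected host.minikube.internal hosts block) and the pod's current state, something like the following works, again assuming the context name matches the profile:

	# show the CoreDNS ConfigMap, including the injected host.minikube.internal hosts block
	kubectl --context default-k8s-diff-port-433034 -n kube-system get configmap coredns -o yaml
	# confirm the CoreDNS pod reports Ready
	kubectl --context default-k8s-diff-port-433034 -n kube-system get pods -l k8s-app=kube-dns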
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-433034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-433034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=default-k8s-diff-port-433034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-433034
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-433034
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                50f00333-6091-4f07-9dbc-f9936dd93205
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-8wnb6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-default-k8s-diff-port-433034                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-w6vcl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-default-k8s-diff-port-433034             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-433034    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-tmrrg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-default-k8s-diff-port-433034             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bjqrc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nc8xd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s               kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node default-k8s-diff-port-433034 event: Registered Node default-k8s-diff-port-433034 in Controller
	  Normal  NodeReady                91s                kubelet          Node default-k8s-diff-port-433034 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node default-k8s-diff-port-433034 event: Registered Node default-k8s-diff-port-433034 in Controller
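
	The node description above can be re-checked after the fact with kubectl directly (assuming the same context):

	kubectl --context default-k8s-diff-port-433034 describe node default-k8s-diff-port-433034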
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [6edfed35b96f2b2cbb9c54cdfbf440c89b72c03fc6a8947569d87276098e3d6e] <==
	{"level":"warn","ts":"2025-12-12T20:11:48.041532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.048972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.057341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.063700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.071226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.078320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.088735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.096059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.103897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.113502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.120306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.128683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.135065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.142052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.149507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.156495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.163766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.169920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.176710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.185666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.198246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.211094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.217488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.225212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.274632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42970","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:12:40 up 55 min,  0 user,  load average: 4.99, 3.15, 1.98
	Linux default-k8s-diff-port-433034 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8fc7fbe67821e88822d5c7655631e923b3b25aec05a2aec07ab906239a66992a] <==
	I1212 20:11:50.102960       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:11:50.103238       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 20:11:50.103442       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:11:50.103463       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:11:50.103489       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:11:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:11:50.400122       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:11:50.400157       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:11:50.400169       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:11:50.499305       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:11:50.790495       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:11:50.790540       1 metrics.go:72] Registering metrics
	I1212 20:11:50.790614       1 controller.go:711] "Syncing nftables rules"
	I1212 20:12:00.308405       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:00.308452       1 main.go:301] handling current node
	I1212 20:12:10.308927       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:10.308960       1 main.go:301] handling current node
	I1212 20:12:20.308456       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:20.308492       1 main.go:301] handling current node
	I1212 20:12:30.308426       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:30.308466       1 main.go:301] handling current node
	I1212 20:12:40.308007       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:40.308075       1 main.go:301] handling current node
	
	
	==> kube-apiserver [261b4a83ad82d0b63e1a0022703c411f8ddd6b03f5cbf86192b1fbce85653f93] <==
	I1212 20:11:48.758706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:11:48.758721       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:11:48.758428       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 20:11:48.758925       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:11:48.758447       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 20:11:48.758460       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 20:11:48.758522       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 20:11:48.759324       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1212 20:11:48.762912       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:11:48.767199       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:11:48.795150       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 20:11:48.795248       1 policy_source.go:240] refreshing policies
	I1212 20:11:48.869450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:11:49.053459       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:11:49.081212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:11:49.097558       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:11:49.104586       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:11:49.110189       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:11:49.138387       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.173.145"}
	I1212 20:11:49.146340       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.128.98"}
	I1212 20:11:49.657540       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:11:52.353089       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:11:52.353136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:11:52.552558       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:11:52.604564       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [db085ca1f08ebad1a72de68de42b83fd3c82a1ed0f265e1e74983cd5d88ae7f5] <==
	I1212 20:11:52.132994       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:11:52.149536       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 20:11:52.150738       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 20:11:52.150769       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 20:11:52.150802       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:11:52.150829       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 20:11:52.150821       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 20:11:52.150838       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 20:11:52.151362       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 20:11:52.152166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 20:11:52.152186       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 20:11:52.152411       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 20:11:52.153597       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 20:11:52.153627       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 20:11:52.153671       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 20:11:52.153712       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 20:11:52.153726       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 20:11:52.153733       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 20:11:52.154941       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 20:11:52.154963       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:11:52.155016       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:11:52.161061       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:11:52.161063       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 20:11:52.161065       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 20:11:52.168502       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [004cb4da4fc4a28cc850784ef818eb4543cdf0dedee9670ac227fada50f58160] <==
	I1212 20:11:49.958779       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:11:50.014234       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:11:50.114902       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:11:50.114949       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 20:11:50.115039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:11:50.132926       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:11:50.132971       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:11:50.138877       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:11:50.139250       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:11:50.139288       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:50.140335       1 config.go:200] "Starting service config controller"
	I1212 20:11:50.140361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:11:50.140445       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:11:50.140460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:11:50.140486       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:11:50.140492       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:11:50.140516       1 config.go:309] "Starting node config controller"
	I1212 20:11:50.140544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:11:50.140551       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:11:50.241085       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:11:50.241129       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:11:50.241108       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ebeb10d45d10d2c655391f363492fcf212271217062b328f88a67404cc971388] <==
	I1212 20:11:47.960462       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:11:48.664442       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:11:48.664472       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:11:48.664484       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:11:48.664493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:11:48.712743       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 20:11:48.715434       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:48.719653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:11:48.719745       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:11:48.723758       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:11:48.723852       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:11:48.820140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:11:56 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:56.589943     731 scope.go:117] "RemoveContainer" containerID="9581820bcbf52cb7fb5f6f7baadca377d7633f7b588f36eb5a56cd1ac7fba044"
	Dec 12 20:11:57 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:57.594857     731 scope.go:117] "RemoveContainer" containerID="9581820bcbf52cb7fb5f6f7baadca377d7633f7b588f36eb5a56cd1ac7fba044"
	Dec 12 20:11:57 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:57.595004     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:11:57 default-k8s-diff-port-433034 kubelet[731]: E1212 20:11:57.595207     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:11:58 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:58.599231     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:11:58 default-k8s-diff-port-433034 kubelet[731]: E1212 20:11:58.599809     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:00 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:00.621246     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nc8xd" podStartSLOduration=1.571442621 podStartE2EDuration="8.621222259s" podCreationTimestamp="2025-12-12 20:11:52 +0000 UTC" firstStartedPulling="2025-12-12 20:11:53.050683974 +0000 UTC m=+6.603201498" lastFinishedPulling="2025-12-12 20:12:00.100463622 +0000 UTC m=+13.652981136" observedRunningTime="2025-12-12 20:12:00.620944621 +0000 UTC m=+14.173462154" watchObservedRunningTime="2025-12-12 20:12:00.621222259 +0000 UTC m=+14.173739791"
	Dec 12 20:12:03 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:03.234161     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:12:03 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:03.234427     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:13 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:13.537903     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:12:14 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:14.647439     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:12:14 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:14.648191     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:14 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:14.648446     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:20 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:20.663155     731 scope.go:117] "RemoveContainer" containerID="88edfb91cf4a038250228f682d3173e413b779bc18321abc13169b2fa6574901"
	Dec 12 20:12:23 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:23.233695     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:23 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:23.233928     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:36.537689     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:36.707482     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:36.707689     731 scope.go:117] "RemoveContainer" containerID="4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:36.707890     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:37 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:37.964559     731 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: kubelet.service: Consumed 1.644s CPU time.
	
	
	==> kubernetes-dashboard [57f988954b100c48adbf59a94719d00d2d865dfab8b794ee332c80fa4b999f24] <==
	2025/12/12 20:12:00 Starting overwatch
	2025/12/12 20:12:00 Using namespace: kubernetes-dashboard
	2025/12/12 20:12:00 Using in-cluster config to connect to apiserver
	2025/12/12 20:12:00 Using secret token for csrf signing
	2025/12/12 20:12:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:12:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:12:00 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 20:12:00 Generating JWE encryption key
	2025/12/12 20:12:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:12:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:12:00 Initializing JWE encryption key from synchronized object
	2025/12/12 20:12:00 Creating in-cluster Sidecar client
	2025/12/12 20:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:12:00 Serving insecurely on HTTP port: 9090
	2025/12/12 20:12:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [88edfb91cf4a038250228f682d3173e413b779bc18321abc13169b2fa6574901] <==
	I1212 20:11:49.940609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:12:19.943909       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8] <==
	I1212 20:12:20.713644       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:12:20.728771       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:12:20.728829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:12:20.730858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:24.185168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:28.445466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:32.044319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:35.097876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:38.120757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:38.126017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:38.126359       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:12:38.126549       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433034_32f44bf4-c466-406e-ad46-99aa7830d33f!
	I1212 20:12:38.126579       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"993049af-7bb6-48bb-a2c2-ac2e2f6fa3e3", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-433034_32f44bf4-c466-406e-ad46-99aa7830d33f became leader
	W1212 20:12:38.133677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:38.140731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:38.226836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433034_32f44bf4-c466-406e-ad46-99aa7830d33f!
	W1212 20:12:40.145036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:40.152298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034: exit status 2 (357.462709ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-433034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-433034
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-433034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7",
	        "Created": "2025-12-12T20:10:35.289904623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:11:40.412088111Z",
	            "FinishedAt": "2025-12-12T20:11:39.288294952Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/hosts",
	        "LogPath": "/var/lib/docker/containers/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7/fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7-json.log",
	        "Name": "/default-k8s-diff-port-433034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-433034:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-433034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd3264bb0f47e10f5ed2d4066292d5de07bb0c2a43611792ae5335ebb6ca06a7",
	                "LowerDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16fd782b0a201b5189823b9a6925e35312bdc767755b365cfae5b065abc49f14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-433034",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-433034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-433034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-433034",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-433034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fe8f411539cb5b958f101a44e69299365945f917a48886e77ad3390bdbf3230e",
	            "SandboxKey": "/var/run/docker/netns/fe8f411539cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-433034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9682428112d69f44e5ab9b8a0895f7f7dfc5a7aa9a7423b8acd6944687003e6d",
	                    "EndpointID": "f684e000dbeda3def689071789d225deac1bbbd4d1715137a1149064606143d7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "26:29:1d:43:58:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-433034",
	                        "fd3264bb0f47"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034: exit status 2 (350.798349ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433034 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-433034 logs -n 25: (1.269955326s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-789448 sudo systemctl cat crio --no-pager                                                                                      │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p auto-789448 sudo crio config                                                                                                        │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p auto-789448                                                                                                                         │ auto-789448                  │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ start   │ -p calico-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-789448                │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 pgrep -a kubelet                                                                                                     │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/nsswitch.conf                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/hosts                                                                                                  │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/resolv.conf                                                                                            │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo crictl pods                                                                                                     │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo crictl ps --all                                                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ image   │ default-k8s-diff-port-433034 image list --format=json                                                                                  │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ pause   │ -p default-k8s-diff-port-433034 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo ip a s                                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo ip r s                                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo iptables-save                                                                                                   │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo iptables -t nat -L -n -v                                                                                        │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl status kubelet --all --full --no-pager                                                                │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat kubelet --no-pager                                                                                │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo journalctl -xeu kubelet --all --full --no-pager                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/kubernetes/kubelet.conf                                                                                │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /var/lib/kubelet/config.yaml                                                                                │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl status docker --all --full --no-pager                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat docker --no-pager                                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:12:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:12:08.632296  325830 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:08.632582  325830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:08.632596  325830 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:08.632603  325830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:08.632824  325830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:12:08.633259  325830 out.go:368] Setting JSON to false
	I1212 20:12:08.634466  325830 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3276,"bootTime":1765567053,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:12:08.634526  325830 start.go:143] virtualization: kvm guest
	I1212 20:12:08.636287  325830 out.go:179] * [calico-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:12:08.637783  325830 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:12:08.637805  325830 notify.go:221] Checking for updates...
	I1212 20:12:08.640527  325830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:12:08.641583  325830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:12:08.642550  325830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:12:08.643531  325830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:12:08.644515  325830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:12:08.646531  325830 config.go:182] Loaded profile config "default-k8s-diff-port-433034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:08.646657  325830 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:08.646775  325830 config.go:182] Loaded profile config "kindnet-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:08.646891  325830 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:12:08.670016  325830 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:12:08.670145  325830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:08.727837  325830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 20:12:08.718115531 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:08.727925  325830 docker.go:319] overlay module found
	I1212 20:12:08.729872  325830 out.go:179] * Using the docker driver based on user configuration
	I1212 20:12:08.730902  325830 start.go:309] selected driver: docker
	I1212 20:12:08.730915  325830 start.go:927] validating driver "docker" against <nil>
	I1212 20:12:08.730925  325830 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:12:08.731443  325830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:08.791953  325830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-12 20:12:08.781383266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:08.792158  325830 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:12:08.792425  325830 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:08.794964  325830 out.go:179] * Using Docker driver with root privileges
	I1212 20:12:08.796196  325830 cni.go:84] Creating CNI manager for "calico"
	I1212 20:12:08.796219  325830 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1212 20:12:08.796329  325830 start.go:353] cluster config:
	{Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:12:08.797541  325830 out.go:179] * Starting "calico-789448" primary control-plane node in "calico-789448" cluster
	I1212 20:12:08.798589  325830 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:12:08.799683  325830 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:12:08.800742  325830 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:08.800775  325830 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:12:08.800794  325830 cache.go:65] Caching tarball of preloaded images
	I1212 20:12:08.800842  325830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:12:08.800906  325830 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:12:08.800923  325830 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:12:08.801043  325830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/config.json ...
	I1212 20:12:08.801081  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/config.json: {Name:mk8011d9b30e95660856db3433c630354f571ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:08.821298  325830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:12:08.821318  325830 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:12:08.821347  325830 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:12:08.821385  325830 start.go:360] acquireMachinesLock for calico-789448: {Name:mk7f96f34e4f60fdbf53c82f6cb4ee1f554e00e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:12:08.821483  325830 start.go:364] duration metric: took 78.792µs to acquireMachinesLock for "calico-789448"
	I1212 20:12:08.821510  325830 start.go:93] Provisioning new machine with config: &{Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:12:08.821572  325830 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:12:05.105685  319249 addons.go:530] duration metric: took 2.291229446s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1212 20:12:05.581405  319249 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:12:05.585818  319249 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:12:05.585845  319249 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:12:06.081383  319249 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1212 20:12:06.086030  319249 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1212 20:12:06.087196  319249 api_server.go:141] control plane version: v1.34.2
	I1212 20:12:06.087223  319249 api_server.go:131] duration metric: took 1.006598775s to wait for apiserver health ...
	I1212 20:12:06.087234  319249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:12:06.091799  319249 system_pods.go:59] 8 kube-system pods found
	I1212 20:12:06.091840  319249 system_pods.go:61] "coredns-66bc5c9577-zg2v9" [8b0daa17-68a0-4f3f-b50c-114a8218c542] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:06.091853  319249 system_pods.go:61] "etcd-embed-certs-399565" [ba75b498-a50f-48ae-9e09-c928ba04794f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:12:06.091863  319249 system_pods.go:61] "kindnet-5fbmr" [6c2a5685-5864-4af2-a1ef-5f355fd2a95b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:12:06.091873  319249 system_pods.go:61] "kube-apiserver-embed-certs-399565" [8850ea17-2667-403a-af36-83cdefa2548a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:12:06.091881  319249 system_pods.go:61] "kube-controller-manager-embed-certs-399565" [5e04b62d-f4fd-4664-aee8-e9b0a4b015f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:12:06.091896  319249 system_pods.go:61] "kube-proxy-xgs9b" [82692b91-abfa-4ef0-915d-af7f57048d82] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 20:12:06.091904  319249 system_pods.go:61] "kube-scheduler-embed-certs-399565" [3f9b76ad-c6b0-4de4-86ad-2ca8b4fee658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:12:06.091921  319249 system_pods.go:61] "storage-provisioner" [970ffc0a-f3a7-4981-a59e-f47762e9d53e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:06.091931  319249 system_pods.go:74] duration metric: took 4.690837ms to wait for pod list to return data ...
	I1212 20:12:06.091941  319249 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:12:06.094724  319249 default_sa.go:45] found service account: "default"
	I1212 20:12:06.094742  319249 default_sa.go:55] duration metric: took 2.792846ms for default service account to be created ...
	I1212 20:12:06.094750  319249 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:12:06.097534  319249 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:06.097562  319249 system_pods.go:89] "coredns-66bc5c9577-zg2v9" [8b0daa17-68a0-4f3f-b50c-114a8218c542] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:06.097575  319249 system_pods.go:89] "etcd-embed-certs-399565" [ba75b498-a50f-48ae-9e09-c928ba04794f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:12:06.097586  319249 system_pods.go:89] "kindnet-5fbmr" [6c2a5685-5864-4af2-a1ef-5f355fd2a95b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 20:12:06.097600  319249 system_pods.go:89] "kube-apiserver-embed-certs-399565" [8850ea17-2667-403a-af36-83cdefa2548a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:12:06.097614  319249 system_pods.go:89] "kube-controller-manager-embed-certs-399565" [5e04b62d-f4fd-4664-aee8-e9b0a4b015f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:12:06.097626  319249 system_pods.go:89] "kube-proxy-xgs9b" [82692b91-abfa-4ef0-915d-af7f57048d82] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 20:12:06.097634  319249 system_pods.go:89] "kube-scheduler-embed-certs-399565" [3f9b76ad-c6b0-4de4-86ad-2ca8b4fee658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:12:06.097643  319249 system_pods.go:89] "storage-provisioner" [970ffc0a-f3a7-4981-a59e-f47762e9d53e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:06.097652  319249 system_pods.go:126] duration metric: took 2.895822ms to wait for k8s-apps to be running ...
	I1212 20:12:06.097664  319249 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:12:06.097708  319249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:06.114598  319249 system_svc.go:56] duration metric: took 16.926454ms WaitForService to wait for kubelet
	I1212 20:12:06.114624  319249 kubeadm.go:587] duration metric: took 3.300225537s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:06.114641  319249 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:12:06.117538  319249 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:12:06.117562  319249 node_conditions.go:123] node cpu capacity is 8
	I1212 20:12:06.117586  319249 node_conditions.go:105] duration metric: took 2.939227ms to run NodePressure ...
	I1212 20:12:06.117604  319249 start.go:242] waiting for startup goroutines ...
	I1212 20:12:06.117617  319249 start.go:247] waiting for cluster config update ...
	I1212 20:12:06.117634  319249 start.go:256] writing updated cluster config ...
	I1212 20:12:06.117913  319249 ssh_runner.go:195] Run: rm -f paused
	I1212 20:12:06.122456  319249 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:06.127112  319249 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zg2v9" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 20:12:08.131882  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:05.798810  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:08.295339  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:08.392839  312743 node_ready.go:57] node "kindnet-789448" has "Ready":"False" status (will retry)
	W1212 20:12:10.892637  312743 node_ready.go:57] node "kindnet-789448" has "Ready":"False" status (will retry)
	I1212 20:12:11.391910  312743 node_ready.go:49] node "kindnet-789448" is "Ready"
	I1212 20:12:11.391938  312743 node_ready.go:38] duration metric: took 11.503350111s for node "kindnet-789448" to be "Ready" ...
	I1212 20:12:11.391955  312743 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:12:11.392006  312743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:12:11.409047  312743 api_server.go:72] duration metric: took 11.983405525s to wait for apiserver process to appear ...
	I1212 20:12:11.409075  312743 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:12:11.409096  312743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 20:12:11.414601  312743 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 20:12:11.416214  312743 api_server.go:141] control plane version: v1.34.2
	I1212 20:12:11.416241  312743 api_server.go:131] duration metric: took 7.158358ms to wait for apiserver health ...
	I1212 20:12:11.416252  312743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:12:11.420435  312743 system_pods.go:59] 8 kube-system pods found
	I1212 20:12:11.420478  312743 system_pods.go:61] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:11.420491  312743 system_pods.go:61] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:11.420503  312743 system_pods.go:61] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:11.420509  312743 system_pods.go:61] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:11.420515  312743 system_pods.go:61] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:11.420521  312743 system_pods.go:61] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:11.420526  312743 system_pods.go:61] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:11.420533  312743 system_pods.go:61] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:11.420541  312743 system_pods.go:74] duration metric: took 4.282454ms to wait for pod list to return data ...
	I1212 20:12:11.420549  312743 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:12:11.422986  312743 default_sa.go:45] found service account: "default"
	I1212 20:12:11.423005  312743 default_sa.go:55] duration metric: took 2.44913ms for default service account to be created ...
	I1212 20:12:11.423014  312743 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:12:08.823301  325830 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:12:08.823503  325830 start.go:159] libmachine.API.Create for "calico-789448" (driver="docker")
	I1212 20:12:08.823530  325830 client.go:173] LocalClient.Create starting
	I1212 20:12:08.823590  325830 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:12:08.823619  325830 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:08.823634  325830 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:08.823689  325830 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:12:08.823707  325830 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:08.823719  325830 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:08.824046  325830 cli_runner.go:164] Run: docker network inspect calico-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:12:08.841250  325830 cli_runner.go:211] docker network inspect calico-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:12:08.841353  325830 network_create.go:284] running [docker network inspect calico-789448] to gather additional debugging logs...
	I1212 20:12:08.841375  325830 cli_runner.go:164] Run: docker network inspect calico-789448
	W1212 20:12:08.858459  325830 cli_runner.go:211] docker network inspect calico-789448 returned with exit code 1
	I1212 20:12:08.858489  325830 network_create.go:287] error running [docker network inspect calico-789448]: docker network inspect calico-789448: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-789448 not found
	I1212 20:12:08.858505  325830 network_create.go:289] output of [docker network inspect calico-789448]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-789448 not found
	
	** /stderr **
	I1212 20:12:08.858636  325830 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:12:08.877531  325830 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:12:08.878458  325830 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:12:08.879486  325830 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:12:08.880236  325830 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c165baeec493 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:78:86:50:3b:d1} reservation:<nil>}
	I1212 20:12:08.881342  325830 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e83c80}
	I1212 20:12:08.881365  325830 network_create.go:124] attempt to create docker network calico-789448 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 20:12:08.881434  325830 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-789448 calico-789448
	I1212 20:12:08.930625  325830 network_create.go:108] docker network calico-789448 192.168.85.0/24 created
	I1212 20:12:08.930658  325830 kic.go:121] calculated static IP "192.168.85.2" for the "calico-789448" container
	I1212 20:12:08.930724  325830 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:12:08.953127  325830 cli_runner.go:164] Run: docker volume create calico-789448 --label name.minikube.sigs.k8s.io=calico-789448 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:12:08.978653  325830 oci.go:103] Successfully created a docker volume calico-789448
	I1212 20:12:08.978793  325830 cli_runner.go:164] Run: docker run --rm --name calico-789448-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-789448 --entrypoint /usr/bin/test -v calico-789448:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:12:09.518414  325830 oci.go:107] Successfully prepared a docker volume calico-789448
	I1212 20:12:09.518497  325830 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:09.518510  325830 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:12:09.518582  325830 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:12:11.520781  312743 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:11.520824  312743 system_pods.go:89] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:11.520833  312743 system_pods.go:89] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:11.520841  312743 system_pods.go:89] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:11.520854  312743 system_pods.go:89] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:11.520860  312743 system_pods.go:89] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:11.520866  312743 system_pods.go:89] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:11.520871  312743 system_pods.go:89] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:11.520880  312743 system_pods.go:89] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:12:11.520914  312743 retry.go:31] will retry after 191.610621ms: missing components: kube-dns
	I1212 20:12:11.720885  312743 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:11.720923  312743 system_pods.go:89] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:11.720931  312743 system_pods.go:89] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:11.720940  312743 system_pods.go:89] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:11.720945  312743 system_pods.go:89] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:11.720951  312743 system_pods.go:89] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:11.720956  312743 system_pods.go:89] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:11.720962  312743 system_pods.go:89] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:11.720975  312743 system_pods.go:89] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Running
	I1212 20:12:11.720998  312743 retry.go:31] will retry after 294.52645ms: missing components: kube-dns
	I1212 20:12:12.020125  312743 system_pods.go:86] 8 kube-system pods found
	I1212 20:12:12.020153  312743 system_pods.go:89] "coredns-66bc5c9577-6jhfx" [88340ae1-f626-4c10-aad7-d44d656437c4] Running
	I1212 20:12:12.020159  312743 system_pods.go:89] "etcd-kindnet-789448" [d14ad3a9-9e69-4cd5-9e96-8ca5d90fdcc5] Running
	I1212 20:12:12.020163  312743 system_pods.go:89] "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
	I1212 20:12:12.020167  312743 system_pods.go:89] "kube-apiserver-kindnet-789448" [9330e553-0ea8-4342-85fd-1d4ff1af7f9c] Running
	I1212 20:12:12.020170  312743 system_pods.go:89] "kube-controller-manager-kindnet-789448" [fe794ed6-f3f4-4bbd-9327-ba6129157f0b] Running
	I1212 20:12:12.020174  312743 system_pods.go:89] "kube-proxy-fq86t" [dedb5e00-2ba2-4e3a-9060-02cf69ac8e30] Running
	I1212 20:12:12.020180  312743 system_pods.go:89] "kube-scheduler-kindnet-789448" [77d161a3-a828-4d8b-96f7-7bd30e6bf609] Running
	I1212 20:12:12.020184  312743 system_pods.go:89] "storage-provisioner" [c671d11e-a5a7-46f8-8250-635f4070bf92] Running
	I1212 20:12:12.020193  312743 system_pods.go:126] duration metric: took 597.173096ms to wait for k8s-apps to be running ...
	I1212 20:12:12.020206  312743 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:12:12.020252  312743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:12.034755  312743 system_svc.go:56] duration metric: took 14.539491ms WaitForService to wait for kubelet
	I1212 20:12:12.034781  312743 kubeadm.go:587] duration metric: took 12.609143945s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:12.034811  312743 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:12:12.037821  312743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 20:12:12.037849  312743 node_conditions.go:123] node cpu capacity is 8
	I1212 20:12:12.037876  312743 node_conditions.go:105] duration metric: took 3.054573ms to run NodePressure ...
	I1212 20:12:12.037900  312743 start.go:242] waiting for startup goroutines ...
	I1212 20:12:12.037914  312743 start.go:247] waiting for cluster config update ...
	I1212 20:12:12.037931  312743 start.go:256] writing updated cluster config ...
	I1212 20:12:12.067389  312743 ssh_runner.go:195] Run: rm -f paused
	I1212 20:12:12.072388  312743 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:12.076424  312743 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6jhfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.080756  312743 pod_ready.go:94] pod "coredns-66bc5c9577-6jhfx" is "Ready"
	I1212 20:12:12.080777  312743 pod_ready.go:86] duration metric: took 4.333416ms for pod "coredns-66bc5c9577-6jhfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.082871  312743 pod_ready.go:83] waiting for pod "etcd-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.086624  312743 pod_ready.go:94] pod "etcd-kindnet-789448" is "Ready"
	I1212 20:12:12.086641  312743 pod_ready.go:86] duration metric: took 3.750837ms for pod "etcd-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.088472  312743 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.093346  312743 pod_ready.go:94] pod "kube-apiserver-kindnet-789448" is "Ready"
	I1212 20:12:12.093367  312743 pod_ready.go:86] duration metric: took 4.877638ms for pod "kube-apiserver-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.095405  312743 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.477382  312743 pod_ready.go:94] pod "kube-controller-manager-kindnet-789448" is "Ready"
	I1212 20:12:12.477412  312743 pod_ready.go:86] duration metric: took 381.985061ms for pod "kube-controller-manager-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:12.676940  312743 pod_ready.go:83] waiting for pod "kube-proxy-fq86t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.076687  312743 pod_ready.go:94] pod "kube-proxy-fq86t" is "Ready"
	I1212 20:12:13.076709  312743 pod_ready.go:86] duration metric: took 399.749827ms for pod "kube-proxy-fq86t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.276544  312743 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.699369  312743 pod_ready.go:94] pod "kube-scheduler-kindnet-789448" is "Ready"
	I1212 20:12:13.699397  312743 pod_ready.go:86] duration metric: took 422.829219ms for pod "kube-scheduler-kindnet-789448" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:13.699412  312743 pod_ready.go:40] duration metric: took 1.626996152s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:13.756607  312743 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:12:13.814369  312743 out.go:179] * Done! kubectl is now configured to use "kindnet-789448" cluster and "default" namespace by default
	W1212 20:12:10.134796  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:12.162083  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:10.296202  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:12.796139  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:14.803072  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	I1212 20:12:14.060749  325830 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.54212943s)
	I1212 20:12:14.060792  325830 kic.go:203] duration metric: took 4.542279248s to extract preloaded images to volume ...
	W1212 20:12:14.060889  325830 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:12:14.060924  325830 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:12:14.060962  325830 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:12:14.130383  325830 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-789448 --name calico-789448 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-789448 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-789448 --network calico-789448 --ip 192.168.85.2 --volume calico-789448:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:12:14.623459  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Running}}
	I1212 20:12:14.651212  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:14.679648  325830 cli_runner.go:164] Run: docker exec calico-789448 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:12:14.741833  325830 oci.go:144] the created container "calico-789448" has a running status.
	I1212 20:12:14.741869  325830 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa...
	I1212 20:12:14.834456  325830 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:12:14.866879  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:14.892440  325830 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:12:14.892462  325830 kic_runner.go:114] Args: [docker exec --privileged calico-789448 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:12:14.945230  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:14.967565  325830 machine.go:94] provisionDockerMachine start ...
	I1212 20:12:14.967726  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:14.990216  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:14.990571  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:14.990589  325830 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:12:14.991293  325830 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33224->127.0.0.1:33119: read: connection reset by peer
	I1212 20:12:18.122742  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-789448
	
	I1212 20:12:18.122772  325830 ubuntu.go:182] provisioning hostname "calico-789448"
	I1212 20:12:18.122832  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.142923  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:18.143123  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:18.143140  325830 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-789448 && echo "calico-789448" | sudo tee /etc/hostname
	I1212 20:12:18.282917  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-789448
	
	I1212 20:12:18.282988  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.301179  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:18.301425  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:18.301449  325830 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-789448' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-789448/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-789448' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:12:18.429256  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:12:18.429297  325830 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-5703/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-5703/.minikube}
	I1212 20:12:18.429323  325830 ubuntu.go:190] setting up certificates
	I1212 20:12:18.429334  325830 provision.go:84] configureAuth start
	I1212 20:12:18.429381  325830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-789448
	I1212 20:12:18.447469  325830 provision.go:143] copyHostCerts
	I1212 20:12:18.447542  325830 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem, removing ...
	I1212 20:12:18.447556  325830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem
	I1212 20:12:18.447640  325830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/ca.pem (1078 bytes)
	I1212 20:12:18.448434  325830 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem, removing ...
	I1212 20:12:18.448449  325830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem
	I1212 20:12:18.448501  325830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/cert.pem (1123 bytes)
	I1212 20:12:18.448590  325830 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem, removing ...
	I1212 20:12:18.448598  325830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem
	I1212 20:12:18.448623  325830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-5703/.minikube/key.pem (1679 bytes)
	I1212 20:12:18.448678  325830 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem org=jenkins.calico-789448 san=[127.0.0.1 192.168.85.2 calico-789448 localhost minikube]
	I1212 20:12:18.599755  325830 provision.go:177] copyRemoteCerts
	I1212 20:12:18.599816  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:12:18.599850  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.617796  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	W1212 20:12:14.634841  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:17.132221  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:19.132369  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:17.295027  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:19.295580  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	I1212 20:12:18.712789  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:12:18.730824  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:12:18.747634  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:12:18.764243  325830 provision.go:87] duration metric: took 334.888323ms to configureAuth
	I1212 20:12:18.764298  325830 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:12:18.764454  325830 config.go:182] Loaded profile config "calico-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:18.764547  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:18.782656  325830 main.go:143] libmachine: Using SSH client type: native
	I1212 20:12:18.782866  325830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1212 20:12:18.782882  325830 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:12:19.053559  325830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:12:19.053579  325830 machine.go:97] duration metric: took 4.085992673s to provisionDockerMachine
	I1212 20:12:19.053591  325830 client.go:176] duration metric: took 10.23005055s to LocalClient.Create
	I1212 20:12:19.053607  325830 start.go:167] duration metric: took 10.230104576s to libmachine.API.Create "calico-789448"
	I1212 20:12:19.053617  325830 start.go:293] postStartSetup for "calico-789448" (driver="docker")
	I1212 20:12:19.053634  325830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:12:19.053688  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:12:19.053722  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.072089  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.168787  325830 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:12:19.172107  325830 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:12:19.172132  325830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:12:19.172141  325830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/addons for local assets ...
	I1212 20:12:19.172191  325830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-5703/.minikube/files for local assets ...
	I1212 20:12:19.172297  325830 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem -> 92542.pem in /etc/ssl/certs
	I1212 20:12:19.172420  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:12:19.180070  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:12:19.199078  325830 start.go:296] duration metric: took 145.442426ms for postStartSetup
	I1212 20:12:19.199401  325830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-789448
	I1212 20:12:19.216704  325830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/config.json ...
	I1212 20:12:19.216996  325830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:12:19.217066  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.233853  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.324856  325830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:12:19.329215  325830 start.go:128] duration metric: took 10.507632149s to createHost
	I1212 20:12:19.329243  325830 start.go:83] releasing machines lock for "calico-789448", held for 10.507746865s
	I1212 20:12:19.329340  325830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-789448
	I1212 20:12:19.346424  325830 ssh_runner.go:195] Run: cat /version.json
	I1212 20:12:19.346468  325830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:12:19.346544  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.346474  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:19.364944  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.365325  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:19.511680  325830 ssh_runner.go:195] Run: systemctl --version
	I1212 20:12:19.518008  325830 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:12:19.552443  325830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:12:19.556849  325830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:12:19.556921  325830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:12:19.581714  325830 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:12:19.581731  325830 start.go:496] detecting cgroup driver to use...
	I1212 20:12:19.581759  325830 detect.go:190] detected "systemd" cgroup driver on host os
	I1212 20:12:19.581815  325830 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:12:19.597037  325830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:12:19.608550  325830 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:12:19.608593  325830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:12:19.625473  325830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:12:19.642593  325830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:12:19.722717  325830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:12:19.807953  325830 docker.go:234] disabling docker service ...
	I1212 20:12:19.808014  325830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:12:19.825200  325830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:12:19.837298  325830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:12:19.927989  325830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:12:20.030453  325830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:12:20.042651  325830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:12:20.057112  325830 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:12:20.057160  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.067377  325830 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 20:12:20.067431  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.075889  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.084354  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.092419  325830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:12:20.100181  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.108312  325830 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.121366  325830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:12:20.130194  325830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:12:20.138310  325830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:12:20.145624  325830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:12:20.236400  325830 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:12:20.389308  325830 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:12:20.389386  325830 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:12:20.393255  325830 start.go:564] Will wait 60s for crictl version
	I1212 20:12:20.393322  325830 ssh_runner.go:195] Run: which crictl
	I1212 20:12:20.396868  325830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:12:20.419924  325830 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:12:20.419993  325830 ssh_runner.go:195] Run: crio --version
	I1212 20:12:20.447743  325830 ssh_runner.go:195] Run: crio --version
	I1212 20:12:20.476900  325830 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:12:20.478071  325830 cli_runner.go:164] Run: docker network inspect calico-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:12:20.495558  325830 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 20:12:20.499490  325830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:12:20.509329  325830 kubeadm.go:884] updating cluster {Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:12:20.509435  325830 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:20.509477  325830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:12:20.539070  325830 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:12:20.539089  325830 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:12:20.539137  325830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:12:20.562199  325830 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:12:20.562217  325830 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:12:20.562226  325830 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 20:12:20.562328  325830 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-789448 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1212 20:12:20.562399  325830 ssh_runner.go:195] Run: crio config
	I1212 20:12:20.613663  325830 cni.go:84] Creating CNI manager for "calico"
	I1212 20:12:20.613689  325830 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:12:20.613709  325830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-789448 NodeName:calico-789448 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:12:20.613828  325830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-789448"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:12:20.613887  325830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:12:20.621850  325830 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:12:20.621920  325830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:12:20.629793  325830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:12:20.642971  325830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:12:20.658515  325830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
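The YAML dumped above is the kubeadm config minikube writes to /var/tmp/minikube/kubeadm.yaml.new (the scp on this line) and later copies to /var/tmp/minikube/kubeadm.yaml before init. Once the file is in place it can be sanity-checked with the kubeadm shipped for this Kubernetes version; a sketch, assuming kubeadm v1.34 still provides the config validate subcommand:
    minikube ssh -p calico-789448 -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml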
	I1212 20:12:20.674228  325830 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:12:20.679039  325830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:12:20.690319  325830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:12:20.780637  325830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:12:20.808472  325830 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448 for IP: 192.168.85.2
	I1212 20:12:20.808494  325830 certs.go:195] generating shared ca certs ...
	I1212 20:12:20.808511  325830 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b547b64ae18c808f2dedd3d3cb8fa4a59ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:20.808662  325830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key
	I1212 20:12:20.808741  325830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key
	I1212 20:12:20.808759  325830 certs.go:257] generating profile certs ...
	I1212 20:12:20.808839  325830 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.key
	I1212 20:12:20.808862  325830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.crt with IP's: []
	I1212 20:12:20.925482  325830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.crt ...
	I1212 20:12:20.925513  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.crt: {Name:mke3f8a05c08a0f013522af18e439d9dfe68b020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:20.925698  325830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.key ...
	I1212 20:12:20.925711  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/client.key: {Name:mk5b610dd2f6c24e4ffb591fa488fde141d4308f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:20.925819  325830 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415
	I1212 20:12:20.925837  325830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 20:12:21.208673  325830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415 ...
	I1212 20:12:21.208704  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415: {Name:mk625f14f45e18fa35cc335dd91b9ee90b1d0dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.208908  325830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415 ...
	I1212 20:12:21.208922  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415: {Name:mka3a1c2b57c52bece9d0390d3ab736647a50949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.209025  325830 certs.go:382] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt.77f45415 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt
	I1212 20:12:21.209137  325830 certs.go:386] copying /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key.77f45415 -> /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key
	I1212 20:12:21.209220  325830 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key
	I1212 20:12:21.209237  325830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt with IP's: []
	I1212 20:12:21.341713  325830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt ...
	I1212 20:12:21.341752  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt: {Name:mk3638bcc071dc36fe5442929b6d1b03a9020518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.341947  325830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key ...
	I1212 20:12:21.341966  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key: {Name:mkb62fd37a3fba0095570e8d5b1f7238ef95bc6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:21.342262  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem (1338 bytes)
	W1212 20:12:21.342340  325830 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254_empty.pem, impossibly tiny 0 bytes
	I1212 20:12:21.342357  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:12:21.342397  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:12:21.342437  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:12:21.342474  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/certs/key.pem (1679 bytes)
	I1212 20:12:21.342541  325830 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem (1708 bytes)
	I1212 20:12:21.343500  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:12:21.364804  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:12:21.382538  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:12:21.400652  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:12:21.423060  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 20:12:21.443361  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:12:21.462352  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:12:21.482470  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/calico-789448/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:12:21.503516  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:12:21.527142  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/certs/9254.pem --> /usr/share/ca-certificates/9254.pem (1338 bytes)
	I1212 20:12:21.549942  325830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/ssl/certs/92542.pem --> /usr/share/ca-certificates/92542.pem (1708 bytes)
	I1212 20:12:21.570764  325830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:12:21.586510  325830 ssh_runner.go:195] Run: openssl version
	I1212 20:12:21.593628  325830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.602577  325830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:12:21.611379  325830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.615896  325830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.615941  325830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:12:21.669150  325830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:12:21.679564  325830 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:12:21.688090  325830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.695982  325830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9254.pem /etc/ssl/certs/9254.pem
	I1212 20:12:21.703499  325830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.707644  325830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:38 /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.707694  325830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9254.pem
	I1212 20:12:21.750453  325830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:12:21.760915  325830 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9254.pem /etc/ssl/certs/51391683.0
	I1212 20:12:21.769555  325830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.777606  325830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/92542.pem /etc/ssl/certs/92542.pem
	I1212 20:12:21.785204  325830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.789301  325830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:38 /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.789353  325830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92542.pem
	I1212 20:12:21.841982  325830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:12:21.851919  325830 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/92542.pem /etc/ssl/certs/3ec20f2e.0
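The openssl/ln pairs above install each CA into OpenSSL's hashed certificate directory: openssl x509 -hash -noout prints the subject hash (b5213941, 51391683, 3ec20f2e in this run), and the symlink under /etc/ssl/certs must be named <hash>.0 for TLS libraries to find the certificate. A minimal sketch of the same pattern for one certificate:
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"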
	I1212 20:12:21.861800  325830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:12:21.866040  325830 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:12:21.866093  325830 kubeadm.go:401] StartCluster: {Name:calico-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:12:21.866189  325830 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:12:21.866247  325830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:12:21.899656  325830 cri.go:89] found id: ""
	I1212 20:12:21.899721  325830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:12:21.909813  325830 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:12:21.918531  325830 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:12:21.918579  325830 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:12:21.928050  325830 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:12:21.928067  325830 kubeadm.go:158] found existing configuration files:
	
	I1212 20:12:21.928110  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:12:21.938015  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:12:21.938075  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:12:21.946673  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:12:21.954763  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:12:21.954813  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:12:21.963567  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:12:21.973126  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:12:21.973173  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:12:21.982328  325830 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:12:21.991630  325830 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:12:21.991677  325830 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:12:21.999921  325830 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:12:22.039340  325830 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:12:22.039441  325830 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:12:22.060783  325830 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:12:22.060889  325830 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1212 20:12:22.060948  325830 kubeadm.go:319] OS: Linux
	I1212 20:12:22.061024  325830 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:12:22.061118  325830 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:12:22.061195  325830 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:12:22.061269  325830 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:12:22.061370  325830 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:12:22.061439  325830 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:12:22.061510  325830 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:12:22.061571  325830 kubeadm.go:319] CGROUPS_IO: enabled
	I1212 20:12:22.127512  325830 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:12:22.127673  325830 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:12:22.127797  325830 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:12:22.137709  325830 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:12:22.139919  325830 out.go:252]   - Generating certificates and keys ...
	I1212 20:12:22.140020  325830 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:12:22.140130  325830 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:12:22.221827  325830 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:12:22.348357  325830 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:12:22.378226  325830 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:12:22.659720  325830 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:12:22.853235  325830 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:12:22.853421  325830 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:12:23.109106  325830 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:12:23.109334  325830 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-789448 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:12:23.258493  325830 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:12:23.493245  325830 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:12:23.543423  325830 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:12:23.543583  325830 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1212 20:12:21.134248  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:23.632623  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:23.657558  325830 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:12:24.265466  325830 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:12:24.468471  325830 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:12:24.717803  325830 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:12:24.858348  325830 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:12:24.858883  325830 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:12:24.862293  325830 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1212 20:12:21.307719  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	W1212 20:12:23.795776  315481 pod_ready.go:104] pod "coredns-66bc5c9577-8wnb6" is not "Ready", error: <nil>
	I1212 20:12:24.295462  315481 pod_ready.go:94] pod "coredns-66bc5c9577-8wnb6" is "Ready"
	I1212 20:12:24.295486  315481 pod_ready.go:86] duration metric: took 34.005133721s for pod "coredns-66bc5c9577-8wnb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.297616  315481 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.301292  315481 pod_ready.go:94] pod "etcd-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:24.301311  315481 pod_ready.go:86] duration metric: took 3.673476ms for pod "etcd-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.303086  315481 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.306436  315481 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:24.306453  315481 pod_ready.go:86] duration metric: took 3.349778ms for pod "kube-apiserver-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.308309  315481 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.494945  315481 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:24.494969  315481 pod_ready.go:86] duration metric: took 186.639564ms for pod "kube-controller-manager-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:24.694822  315481 pod_ready.go:83] waiting for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.095498  315481 pod_ready.go:94] pod "kube-proxy-tmrrg" is "Ready"
	I1212 20:12:25.095526  315481 pod_ready.go:86] duration metric: took 400.673048ms for pod "kube-proxy-tmrrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.294095  315481 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.693984  315481 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433034" is "Ready"
	I1212 20:12:25.694009  315481 pod_ready.go:86] duration metric: took 399.891348ms for pod "kube-scheduler-default-k8s-diff-port-433034" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:25.694020  315481 pod_ready.go:40] duration metric: took 35.407272239s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:25.739661  315481 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:12:25.740989  315481 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433034" cluster and "default" namespace by default
	I1212 20:12:24.863533  325830 out.go:252]   - Booting up control plane ...
	I1212 20:12:24.863624  325830 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:12:24.863733  325830 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:12:24.864337  325830 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:12:24.877901  325830 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:12:24.878053  325830 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:12:24.884143  325830 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:12:24.884439  325830 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:12:24.884499  325830 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:12:24.981216  325830 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:12:24.981388  325830 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:12:25.482746  325830 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.574022ms
	I1212 20:12:25.485771  325830 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:12:25.485912  325830 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:12:25.486044  325830 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:12:25.486123  325830 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:12:26.990739  325830 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504837982s
	I1212 20:12:27.721889  325830 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.236017201s
	W1212 20:12:26.133069  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:28.631833  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:29.487737  325830 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001900496s
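The three control-plane-check probes above hit the endpoints kubeadm reports: the API server livez on 192.168.85.2:8443 and the controller-manager and scheduler health ports on localhost. They can be re-checked by hand from inside the node; a sketch, assuming curl is available in the kicbase image:
    minikube ssh -p calico-789448 -- curl -sk https://192.168.85.2:8443/livez
    minikube ssh -p calico-789448 -- curl -sk https://127.0.0.1:10257/healthz
    minikube ssh -p calico-789448 -- curl -sk https://127.0.0.1:10259/livez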
	I1212 20:12:29.506030  325830 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:12:29.517944  325830 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:12:29.527715  325830 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:12:29.528023  325830 kubeadm.go:319] [mark-control-plane] Marking the node calico-789448 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:12:29.535718  325830 kubeadm.go:319] [bootstrap-token] Using token: kxqoft.qy2o8c8ntm56u2md
	I1212 20:12:29.536990  325830 out.go:252]   - Configuring RBAC rules ...
	I1212 20:12:29.537149  325830 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:12:29.540418  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:12:29.545302  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:12:29.547972  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:12:29.551343  325830 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:12:29.553931  325830 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:12:29.893545  325830 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:12:30.307381  325830 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:12:30.893461  325830 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:12:30.894248  325830 kubeadm.go:319] 
	I1212 20:12:30.894359  325830 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:12:30.894381  325830 kubeadm.go:319] 
	I1212 20:12:30.894447  325830 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:12:30.894454  325830 kubeadm.go:319] 
	I1212 20:12:30.894492  325830 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:12:30.894547  325830 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:12:30.894629  325830 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:12:30.894641  325830 kubeadm.go:319] 
	I1212 20:12:30.894722  325830 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:12:30.894733  325830 kubeadm.go:319] 
	I1212 20:12:30.894802  325830 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:12:30.894819  325830 kubeadm.go:319] 
	I1212 20:12:30.894901  325830 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:12:30.894999  325830 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:12:30.895093  325830 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:12:30.895102  325830 kubeadm.go:319] 
	I1212 20:12:30.895215  325830 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:12:30.895341  325830 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:12:30.895349  325830 kubeadm.go:319] 
	I1212 20:12:30.895462  325830 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kxqoft.qy2o8c8ntm56u2md \
	I1212 20:12:30.895611  325830 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c \
	I1212 20:12:30.895657  325830 kubeadm.go:319] 	--control-plane 
	I1212 20:12:30.895666  325830 kubeadm.go:319] 
	I1212 20:12:30.895770  325830 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:12:30.895780  325830 kubeadm.go:319] 
	I1212 20:12:30.895904  325830 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kxqoft.qy2o8c8ntm56u2md \
	I1212 20:12:30.896045  325830 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2c49fb95e06ac971f3e9f3fbce39707c55b5d19bac31f7f0c0750449d5e9f38c 
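The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node and compared with the value kubeadm printed; a sketch using the certificatesDir from this config (/var/lib/minikube/certs) and assuming an RSA CA key:
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'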
	I1212 20:12:30.898247  325830 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1212 20:12:30.898382  325830 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:12:30.898412  325830 cni.go:84] Creating CNI manager for "calico"
	I1212 20:12:30.899946  325830 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1212 20:12:30.901356  325830 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:12:30.901374  325830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1212 20:12:30.915011  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:12:31.622556  325830 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:12:31.622627  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:31.622657  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-789448 minikube.k8s.io/updated_at=2025_12_12T20_12_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=calico-789448 minikube.k8s.io/primary=true
	I1212 20:12:31.635367  325830 ops.go:34] apiserver oom_adj: -16
	I1212 20:12:31.702030  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:32.203031  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:32.702568  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:33.202976  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1212 20:12:30.632320  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:33.133505  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:33.702519  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:34.203097  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:34.702454  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:35.202451  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:35.702744  325830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:12:35.773046  325830 kubeadm.go:1114] duration metric: took 4.150480167s to wait for elevateKubeSystemPrivileges
	I1212 20:12:35.773086  325830 kubeadm.go:403] duration metric: took 13.906995729s to StartCluster
	I1212 20:12:35.773108  325830 settings.go:142] acquiring lock: {Name:mk7b47e673a4fe47e1d03b804f21c4b8c19bdcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:35.773184  325830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:12:35.775200  325830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/kubeconfig: {Name:mkf945a9079d724e4b1deff5cd122f8b2719dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:35.775498  325830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:12:35.775521  325830 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:12:35.775766  325830 config.go:182] Loaded profile config "calico-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:35.775823  325830 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:12:35.775905  325830 addons.go:70] Setting storage-provisioner=true in profile "calico-789448"
	I1212 20:12:35.775917  325830 addons.go:70] Setting default-storageclass=true in profile "calico-789448"
	I1212 20:12:35.775924  325830 addons.go:239] Setting addon storage-provisioner=true in "calico-789448"
	I1212 20:12:35.775931  325830 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-789448"
	I1212 20:12:35.775953  325830 host.go:66] Checking if "calico-789448" exists ...
	I1212 20:12:35.776388  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:35.776538  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:35.777057  325830 out.go:179] * Verifying Kubernetes components...
	I1212 20:12:35.782432  325830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:12:35.803309  325830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:12:35.804771  325830 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:12:35.804789  325830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:12:35.804862  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:35.805138  325830 addons.go:239] Setting addon default-storageclass=true in "calico-789448"
	I1212 20:12:35.805181  325830 host.go:66] Checking if "calico-789448" exists ...
	I1212 20:12:35.805651  325830 cli_runner.go:164] Run: docker container inspect calico-789448 --format={{.State.Status}}
	I1212 20:12:35.844550  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:35.845887  325830 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:12:35.845996  325830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:12:35.846048  325830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-789448
	I1212 20:12:35.873390  325830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/calico-789448/id_rsa Username:docker}
	I1212 20:12:35.898903  325830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
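The pipeline above injects a hosts block mapping host.minikube.internal to 192.168.85.1 into the coredns ConfigMap before replacing it. Whether the record landed can be checked from the host; a sketch, assuming the kubeconfig context is named after the profile as minikube normally sets it:
    kubectl --context calico-789448 -n kube-system get configmap coredns -o yaml | grep -A 3 host.minikube.internal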
	I1212 20:12:35.943869  325830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:12:35.971597  325830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:12:35.997202  325830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:12:36.108223  325830 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1212 20:12:36.109213  325830 node_ready.go:35] waiting up to 15m0s for node "calico-789448" to be "Ready" ...
	I1212 20:12:36.333997  325830 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:12:36.335261  325830 addons.go:530] duration metric: took 559.430472ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:12:36.612555  325830 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-789448" context rescaled to 1 replicas
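The rescale logged here matches the single-node topology of this profile; the equivalent manual operation would be something like:
    kubectl --context calico-789448 -n kube-system scale deployment coredns --replicas=1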
	W1212 20:12:38.112969  325830 node_ready.go:57] node "calico-789448" has "Ready":"False" status (will retry)
	W1212 20:12:35.632416  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	W1212 20:12:37.634465  319249 pod_ready.go:104] pod "coredns-66bc5c9577-zg2v9" is not "Ready", error: <nil>
	I1212 20:12:38.633317  319249 pod_ready.go:94] pod "coredns-66bc5c9577-zg2v9" is "Ready"
	I1212 20:12:38.633342  319249 pod_ready.go:86] duration metric: took 32.506205692s for pod "coredns-66bc5c9577-zg2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.635811  319249 pod_ready.go:83] waiting for pod "etcd-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.640519  319249 pod_ready.go:94] pod "etcd-embed-certs-399565" is "Ready"
	I1212 20:12:38.640545  319249 pod_ready.go:86] duration metric: took 4.711813ms for pod "etcd-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.642977  319249 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.647609  319249 pod_ready.go:94] pod "kube-apiserver-embed-certs-399565" is "Ready"
	I1212 20:12:38.647631  319249 pod_ready.go:86] duration metric: took 4.629982ms for pod "kube-apiserver-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.650136  319249 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:38.831373  319249 pod_ready.go:94] pod "kube-controller-manager-embed-certs-399565" is "Ready"
	I1212 20:12:38.831399  319249 pod_ready.go:86] duration metric: took 181.208829ms for pod "kube-controller-manager-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:39.030500  319249 pod_ready.go:83] waiting for pod "kube-proxy-xgs9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:39.430976  319249 pod_ready.go:94] pod "kube-proxy-xgs9b" is "Ready"
	I1212 20:12:39.431007  319249 pod_ready.go:86] duration metric: took 400.47789ms for pod "kube-proxy-xgs9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:39.634440  319249 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:40.031381  319249 pod_ready.go:94] pod "kube-scheduler-embed-certs-399565" is "Ready"
	I1212 20:12:40.031402  319249 pod_ready.go:86] duration metric: took 396.933676ms for pod "kube-scheduler-embed-certs-399565" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:40.031413  319249 pod_ready.go:40] duration metric: took 33.908932277s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:40.086169  319249 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:12:40.087846  319249 out.go:179] * Done! kubectl is now configured to use "embed-certs-399565" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 20:12:13 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:13.814335103Z" level=info msg="Started container" PID=1755 containerID=96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper id=cea523b1-b6ab-4e89-8635-f9bf39edab52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f80555dcff6d92d82fb765165756d067a8b44ae630f1ab0886e11f1e7fd87d83
	Dec 12 20:12:14 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:14.65035882Z" level=info msg="Removing container: c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337" id=2ac522be-6dbb-45cf-b141-4d537123612e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:14 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:14.667411747Z" level=info msg="Removed container c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=2ac522be-6dbb-45cf-b141-4d537123612e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.663585528Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d0172bbc-ca1b-488b-aa57-f705dcd9c5a9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.664634777Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9aa32e31-bc05-4de4-8ff1-ba86218288d9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.665726864Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=14f30f74-c081-4c02-af21-77d396ab2f3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.665872311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.670853478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.671058581Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/222ef159560d1fe09db3ac211a07926487a231fbd5132be4ad5d2ce4586675c4/merged/etc/passwd: no such file or directory"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.671097949Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/222ef159560d1fe09db3ac211a07926487a231fbd5132be4ad5d2ce4586675c4/merged/etc/group: no such file or directory"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.671472864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.698547714Z" level=info msg="Created container a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8: kube-system/storage-provisioner/storage-provisioner" id=14f30f74-c081-4c02-af21-77d396ab2f3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.699081315Z" level=info msg="Starting container: a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8" id=42981bcd-47e3-43cc-a727-32bb2d677f29 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:20 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:20.700982154Z" level=info msg="Started container" PID=1771 containerID=a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8 description=kube-system/storage-provisioner/storage-provisioner id=42981bcd-47e3-43cc-a727-32bb2d677f29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a3f642a50e5643f52b21f16623af2aacd868abf8b1f927b3c6c898510219dd3
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.53833873Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=94957723-489f-4622-9399-498e8adc7dbd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.539076636Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6290cb5b-8661-4070-b20c-cbc0d65aac62 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.540057747Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=297a13b2-49c6-48aa-9898-3fce1c0efe07 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.540321236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.546695188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.547186197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.570300776Z" level=info msg="Created container 4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=297a13b2-49c6-48aa-9898-3fce1c0efe07 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.570993585Z" level=info msg="Starting container: 4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9" id=9e71737d-48bc-428e-8bd3-63aa70c89f2a name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.573171486Z" level=info msg="Started container" PID=1804 containerID=4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper id=9e71737d-48bc-428e-8bd3-63aa70c89f2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f80555dcff6d92d82fb765165756d067a8b44ae630f1ab0886e11f1e7fd87d83
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.708920094Z" level=info msg="Removing container: 96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23" id=2568bafb-9d8b-499c-a712-36e518fea7f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:36 default-k8s-diff-port-433034 crio[565]: time="2025-12-12T20:12:36.721962892Z" level=info msg="Removed container 96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc/dashboard-metrics-scraper" id=2568bafb-9d8b-499c-a712-36e518fea7f9 name=/runtime.v1.RuntimeService/RemoveContainer
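The CRI-O section above is the tail of the crio service journal collected from the default-k8s-diff-port-433034 node. The same view can be pulled directly from the node, for example:
    minikube ssh -p default-k8s-diff-port-433034 -- sudo journalctl -u crio --no-pager -n 30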
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	4da0adad794ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   f80555dcff6d9       dashboard-metrics-scraper-6ffb444bf9-bjqrc             kubernetes-dashboard
	a7a62a905d3ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   7a3f642a50e56       storage-provisioner                                    kube-system
	57f988954b100       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   968c430996154       kubernetes-dashboard-855c9754f9-nc8xd                  kubernetes-dashboard
	8219c3982e2a0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   4bdf4741e7ccb       coredns-66bc5c9577-8wnb6                               kube-system
	88edfb91cf4a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   7a3f642a50e56       storage-provisioner                                    kube-system
	07e847a34e485       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   06fa0538bbaf8       busybox                                                default
	004cb4da4fc4a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           52 seconds ago      Running             kube-proxy                  0                   3e75fb5970139       kube-proxy-tmrrg                                       kube-system
	8fc7fbe67821e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   42610116f316c       kindnet-w6vcl                                          kube-system
	ebeb10d45d10d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           55 seconds ago      Running             kube-scheduler              0                   52d3944b9e66e       kube-scheduler-default-k8s-diff-port-433034            kube-system
	db085ca1f08eb       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           55 seconds ago      Running             kube-controller-manager     0                   d602cdf96a132       kube-controller-manager-default-k8s-diff-port-433034   kube-system
	6edfed35b96f2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   2ab6116eb294f       etcd-default-k8s-diff-port-433034                      kube-system
	261b4a83ad82d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           55 seconds ago      Running             kube-apiserver              0                   9f98da21fa6f4       kube-apiserver-default-k8s-diff-port-433034            kube-system
	
	
	==> coredns [8219c3982e2a00c14a01654ae80b4054af8d527ae5e2473d70b4f644e062d30c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38069 - 27166 "HINFO IN 8740276210099719302.8441911480510563431. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07583546s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-433034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-433034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=default-k8s-diff-port-433034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-433034
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:10:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:12:29 +0000   Fri, 12 Dec 2025 20:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-433034
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                50f00333-6091-4f07-9dbc-f9936dd93205
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-8wnb6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-433034                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-w6vcl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-433034             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-433034    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-tmrrg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-433034             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bjqrc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nc8xd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-433034 event: Registered Node default-k8s-diff-port-433034 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-433034 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-433034 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-433034 event: Registered Node default-k8s-diff-port-433034 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [6edfed35b96f2b2cbb9c54cdfbf440c89b72c03fc6a8947569d87276098e3d6e] <==
	{"level":"warn","ts":"2025-12-12T20:11:48.041532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.048972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.057341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.063700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.071226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.078320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.088735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.096059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.103897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.113502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.120306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.128683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.135065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.142052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.149507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.156495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.163766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.169920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.176710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.185666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.198246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.211094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.217488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.225212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:48.274632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42970","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:12:42 up 55 min,  0 user,  load average: 4.99, 3.15, 1.98
	Linux default-k8s-diff-port-433034 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8fc7fbe67821e88822d5c7655631e923b3b25aec05a2aec07ab906239a66992a] <==
	I1212 20:11:50.102960       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:11:50.103238       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1212 20:11:50.103442       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:11:50.103463       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:11:50.103489       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:11:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:11:50.400122       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:11:50.400157       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:11:50.400169       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:11:50.499305       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:11:50.790495       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:11:50.790540       1 metrics.go:72] Registering metrics
	I1212 20:11:50.790614       1 controller.go:711] "Syncing nftables rules"
	I1212 20:12:00.308405       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:00.308452       1 main.go:301] handling current node
	I1212 20:12:10.308927       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:10.308960       1 main.go:301] handling current node
	I1212 20:12:20.308456       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:20.308492       1 main.go:301] handling current node
	I1212 20:12:30.308426       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:30.308466       1 main.go:301] handling current node
	I1212 20:12:40.308007       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1212 20:12:40.308075       1 main.go:301] handling current node
	
	
	==> kube-apiserver [261b4a83ad82d0b63e1a0022703c411f8ddd6b03f5cbf86192b1fbce85653f93] <==
	I1212 20:11:48.758706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:11:48.758721       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:11:48.758428       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 20:11:48.758925       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:11:48.758447       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 20:11:48.758460       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 20:11:48.758522       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 20:11:48.759324       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1212 20:11:48.762912       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:11:48.767199       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:11:48.795150       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 20:11:48.795248       1 policy_source.go:240] refreshing policies
	I1212 20:11:48.869450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:11:49.053459       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:11:49.081212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:11:49.097558       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:11:49.104586       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:11:49.110189       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:11:49.138387       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.173.145"}
	I1212 20:11:49.146340       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.128.98"}
	I1212 20:11:49.657540       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:11:52.353089       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:11:52.353136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:11:52.552558       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:11:52.604564       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [db085ca1f08ebad1a72de68de42b83fd3c82a1ed0f265e1e74983cd5d88ae7f5] <==
	I1212 20:11:52.132994       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:11:52.149536       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 20:11:52.150738       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 20:11:52.150769       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 20:11:52.150802       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:11:52.150829       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 20:11:52.150821       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 20:11:52.150838       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 20:11:52.151362       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 20:11:52.152166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 20:11:52.152186       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 20:11:52.152411       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 20:11:52.153597       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 20:11:52.153627       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 20:11:52.153671       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 20:11:52.153712       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 20:11:52.153726       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 20:11:52.153733       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 20:11:52.154941       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 20:11:52.154963       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:11:52.155016       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:11:52.161061       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:11:52.161063       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 20:11:52.161065       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 20:11:52.168502       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [004cb4da4fc4a28cc850784ef818eb4543cdf0dedee9670ac227fada50f58160] <==
	I1212 20:11:49.958779       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:11:50.014234       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:11:50.114902       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:11:50.114949       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1212 20:11:50.115039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:11:50.132926       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:11:50.132971       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:11:50.138877       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:11:50.139250       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:11:50.139288       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:50.140335       1 config.go:200] "Starting service config controller"
	I1212 20:11:50.140361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:11:50.140445       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:11:50.140460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:11:50.140486       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:11:50.140492       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:11:50.140516       1 config.go:309] "Starting node config controller"
	I1212 20:11:50.140544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:11:50.140551       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:11:50.241085       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:11:50.241129       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:11:50.241108       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ebeb10d45d10d2c655391f363492fcf212271217062b328f88a67404cc971388] <==
	I1212 20:11:47.960462       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:11:48.664442       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:11:48.664472       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:11:48.664484       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:11:48.664493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:11:48.712743       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 20:11:48.715434       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:11:48.719653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:11:48.719745       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:11:48.723758       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:11:48.723852       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:11:48.820140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:11:56 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:56.589943     731 scope.go:117] "RemoveContainer" containerID="9581820bcbf52cb7fb5f6f7baadca377d7633f7b588f36eb5a56cd1ac7fba044"
	Dec 12 20:11:57 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:57.594857     731 scope.go:117] "RemoveContainer" containerID="9581820bcbf52cb7fb5f6f7baadca377d7633f7b588f36eb5a56cd1ac7fba044"
	Dec 12 20:11:57 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:57.595004     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:11:57 default-k8s-diff-port-433034 kubelet[731]: E1212 20:11:57.595207     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:11:58 default-k8s-diff-port-433034 kubelet[731]: I1212 20:11:58.599231     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:11:58 default-k8s-diff-port-433034 kubelet[731]: E1212 20:11:58.599809     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:00 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:00.621246     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nc8xd" podStartSLOduration=1.571442621 podStartE2EDuration="8.621222259s" podCreationTimestamp="2025-12-12 20:11:52 +0000 UTC" firstStartedPulling="2025-12-12 20:11:53.050683974 +0000 UTC m=+6.603201498" lastFinishedPulling="2025-12-12 20:12:00.100463622 +0000 UTC m=+13.652981136" observedRunningTime="2025-12-12 20:12:00.620944621 +0000 UTC m=+14.173462154" watchObservedRunningTime="2025-12-12 20:12:00.621222259 +0000 UTC m=+14.173739791"
	Dec 12 20:12:03 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:03.234161     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:12:03 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:03.234427     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:13 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:13.537903     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:12:14 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:14.647439     731 scope.go:117] "RemoveContainer" containerID="c9f0decded2db11d84c800b48b50c3852609ab48ac1f3277061eded4dce22337"
	Dec 12 20:12:14 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:14.648191     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:14 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:14.648446     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:20 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:20.663155     731 scope.go:117] "RemoveContainer" containerID="88edfb91cf4a038250228f682d3173e413b779bc18321abc13169b2fa6574901"
	Dec 12 20:12:23 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:23.233695     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:23 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:23.233928     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:36.537689     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:36.707482     731 scope.go:117] "RemoveContainer" containerID="96083d25dfbe7b966a5499a1d5216fca08fbd221d4fba0ff578ef9ce330c7f23"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:36.707689     731 scope.go:117] "RemoveContainer" containerID="4da0adad794baa248ec8fa2c45da542a6e93bc48468a007f657cbc26a9af53e9"
	Dec 12 20:12:36 default-k8s-diff-port-433034 kubelet[731]: E1212 20:12:36.707890     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bjqrc_kubernetes-dashboard(288d279c-693f-48d7-9c25-89ab0643312f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bjqrc" podUID="288d279c-693f-48d7-9c25-89ab0643312f"
	Dec 12 20:12:37 default-k8s-diff-port-433034 kubelet[731]: I1212 20:12:37.964559     731 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:12:37 default-k8s-diff-port-433034 systemd[1]: kubelet.service: Consumed 1.644s CPU time.
	
	
	==> kubernetes-dashboard [57f988954b100c48adbf59a94719d00d2d865dfab8b794ee332c80fa4b999f24] <==
	2025/12/12 20:12:00 Using namespace: kubernetes-dashboard
	2025/12/12 20:12:00 Using in-cluster config to connect to apiserver
	2025/12/12 20:12:00 Using secret token for csrf signing
	2025/12/12 20:12:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:12:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:12:00 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 20:12:00 Generating JWE encryption key
	2025/12/12 20:12:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:12:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:12:00 Initializing JWE encryption key from synchronized object
	2025/12/12 20:12:00 Creating in-cluster Sidecar client
	2025/12/12 20:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:12:00 Serving insecurely on HTTP port: 9090
	2025/12/12 20:12:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:12:00 Starting overwatch
	
	
	==> storage-provisioner [88edfb91cf4a038250228f682d3173e413b779bc18321abc13169b2fa6574901] <==
	I1212 20:11:49.940609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:12:19.943909       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a7a62a905d3ffa40362f751b92853933e880e605a4d8a54419cf3edf75e3bae8] <==
	I1212 20:12:20.713644       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:12:20.728771       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:12:20.728829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:12:20.730858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:24.185168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:28.445466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:32.044319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:35.097876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:38.120757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:38.126017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:38.126359       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:12:38.126549       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433034_32f44bf4-c466-406e-ad46-99aa7830d33f!
	I1212 20:12:38.126579       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"993049af-7bb6-48bb-a2c2-ac2e2f6fa3e3", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-433034_32f44bf4-c466-406e-ad46-99aa7830d33f became leader
	W1212 20:12:38.133677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:38.140731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:38.226836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433034_32f44bf4-c466-406e-ad46-99aa7830d33f!
	W1212 20:12:40.145036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:40.152298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:42.156637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:42.162214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034: exit status 2 (383.377496ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-433034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.35s)
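The embed-certs pause trace further down shows the step this kind of Pause failure trips on: after disabling the kubelet, minikube lists the CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces through crictl (which succeeds) and then runs sudo runc list -f json, which keeps failing with "open /run/runc: no such file or directory" on the crio node; the pause command then exits with status 80. As an illustrative sketch only (the two inner commands are copied from that trace; wrapping them in minikube ssh and the choice of profile are assumptions, not part of the harness), the same two probes can be re-run by hand:

	# CRI-level listing of kube-system containers: this step succeeds in the trace
	out/minikube-linux-amd64 ssh -p embed-certs-399565 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# direct runc listing: this is the step that fails in the trace with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 ssh -p embed-certs-399565 "sudo runc list -f json"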

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-399565 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-399565 --alsologtostderr -v=1: exit status 80 (2.48867915s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-399565 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:12:52.085528  337177 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:52.085779  337177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:52.085788  337177 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:52.085793  337177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:52.085993  337177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:12:52.086228  337177 out.go:368] Setting JSON to false
	I1212 20:12:52.086246  337177 mustload.go:66] Loading cluster: embed-certs-399565
	I1212 20:12:52.086602  337177 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:52.086944  337177 cli_runner.go:164] Run: docker container inspect embed-certs-399565 --format={{.State.Status}}
	I1212 20:12:52.106390  337177 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:12:52.106711  337177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:52.167507  337177 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:71 SystemTime:2025-12-12 20:12:52.155357835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:52.168339  337177 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765505725-22112/minikube-v1.37.0-1765505725-22112-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765505725-22112-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-399565 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 20:12:52.235897  337177 out.go:179] * Pausing node embed-certs-399565 ... 
	I1212 20:12:52.251373  337177 host.go:66] Checking if "embed-certs-399565" exists ...
	I1212 20:12:52.251735  337177 ssh_runner.go:195] Run: systemctl --version
	I1212 20:12:52.251792  337177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-399565
	I1212 20:12:52.272043  337177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/embed-certs-399565/id_rsa Username:docker}
	I1212 20:12:52.366661  337177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:52.378296  337177 pause.go:52] kubelet running: true
	I1212 20:12:52.378342  337177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:12:52.584072  337177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:12:52.584192  337177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:12:52.665852  337177 cri.go:89] found id: "e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c"
	I1212 20:12:52.665880  337177 cri.go:89] found id: "44c3d08146af4a11949a7d4c4e0983875afd2b7ddd7a3408190d4ac8748b9d41"
	I1212 20:12:52.665887  337177 cri.go:89] found id: "9392cc35dd05aa59c29ff54e9dcf13b8a7dbd9d5eb3b57f5998b857dc3679304"
	I1212 20:12:52.665898  337177 cri.go:89] found id: "db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa"
	I1212 20:12:52.665902  337177 cri.go:89] found id: "e41bd589f9d6e6b003b2c73ebdd9a095cb6e17b960d1b4da2b23c408cf0cb8ab"
	I1212 20:12:52.665909  337177 cri.go:89] found id: "33c3adae59f67985263e48c4dcbeb792ce1fb117cf8d4ff5efb24caa08cbb03d"
	I1212 20:12:52.665914  337177 cri.go:89] found id: "71b86d1be5120eb4253f8e9ab45b12d91a5b1989d2e35061f4250da25598d54b"
	I1212 20:12:52.665919  337177 cri.go:89] found id: "cb3e0992ee8404f1e2603f20c229f9311a1e0d6209d65cd7c650f29bc80627f2"
	I1212 20:12:52.665924  337177 cri.go:89] found id: "19e9c49dfeed28b153429669e7b559f50cc7919c3030acc9d2ad133418bc6615"
	I1212 20:12:52.665932  337177 cri.go:89] found id: "a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7"
	I1212 20:12:52.665937  337177 cri.go:89] found id: "cc941461156ed0cf714b01a2f22277d720bedfc937d891d687f0e2e22e6b697a"
	I1212 20:12:52.665943  337177 cri.go:89] found id: ""
	I1212 20:12:52.665993  337177 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:52.683691  337177 retry.go:31] will retry after 174.336058ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:52Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:12:52.858310  337177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:52.877044  337177 pause.go:52] kubelet running: false
	I1212 20:12:52.877119  337177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:12:53.064953  337177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:12:53.065044  337177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:12:53.162602  337177 cri.go:89] found id: "e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c"
	I1212 20:12:53.162627  337177 cri.go:89] found id: "44c3d08146af4a11949a7d4c4e0983875afd2b7ddd7a3408190d4ac8748b9d41"
	I1212 20:12:53.162634  337177 cri.go:89] found id: "9392cc35dd05aa59c29ff54e9dcf13b8a7dbd9d5eb3b57f5998b857dc3679304"
	I1212 20:12:53.162639  337177 cri.go:89] found id: "db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa"
	I1212 20:12:53.162644  337177 cri.go:89] found id: "e41bd589f9d6e6b003b2c73ebdd9a095cb6e17b960d1b4da2b23c408cf0cb8ab"
	I1212 20:12:53.162650  337177 cri.go:89] found id: "33c3adae59f67985263e48c4dcbeb792ce1fb117cf8d4ff5efb24caa08cbb03d"
	I1212 20:12:53.162654  337177 cri.go:89] found id: "71b86d1be5120eb4253f8e9ab45b12d91a5b1989d2e35061f4250da25598d54b"
	I1212 20:12:53.162659  337177 cri.go:89] found id: "cb3e0992ee8404f1e2603f20c229f9311a1e0d6209d65cd7c650f29bc80627f2"
	I1212 20:12:53.162663  337177 cri.go:89] found id: "19e9c49dfeed28b153429669e7b559f50cc7919c3030acc9d2ad133418bc6615"
	I1212 20:12:53.162679  337177 cri.go:89] found id: "a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7"
	I1212 20:12:53.162690  337177 cri.go:89] found id: "cc941461156ed0cf714b01a2f22277d720bedfc937d891d687f0e2e22e6b697a"
	I1212 20:12:53.162694  337177 cri.go:89] found id: ""
	I1212 20:12:53.162737  337177 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:53.177686  337177 retry.go:31] will retry after 213.336438ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:53Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:12:53.392121  337177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:53.404848  337177 pause.go:52] kubelet running: false
	I1212 20:12:53.404910  337177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:12:53.557713  337177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:12:53.557796  337177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:12:53.629606  337177 cri.go:89] found id: "e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c"
	I1212 20:12:53.629636  337177 cri.go:89] found id: "44c3d08146af4a11949a7d4c4e0983875afd2b7ddd7a3408190d4ac8748b9d41"
	I1212 20:12:53.629643  337177 cri.go:89] found id: "9392cc35dd05aa59c29ff54e9dcf13b8a7dbd9d5eb3b57f5998b857dc3679304"
	I1212 20:12:53.629649  337177 cri.go:89] found id: "db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa"
	I1212 20:12:53.629654  337177 cri.go:89] found id: "e41bd589f9d6e6b003b2c73ebdd9a095cb6e17b960d1b4da2b23c408cf0cb8ab"
	I1212 20:12:53.629660  337177 cri.go:89] found id: "33c3adae59f67985263e48c4dcbeb792ce1fb117cf8d4ff5efb24caa08cbb03d"
	I1212 20:12:53.629665  337177 cri.go:89] found id: "71b86d1be5120eb4253f8e9ab45b12d91a5b1989d2e35061f4250da25598d54b"
	I1212 20:12:53.629671  337177 cri.go:89] found id: "cb3e0992ee8404f1e2603f20c229f9311a1e0d6209d65cd7c650f29bc80627f2"
	I1212 20:12:53.629675  337177 cri.go:89] found id: "19e9c49dfeed28b153429669e7b559f50cc7919c3030acc9d2ad133418bc6615"
	I1212 20:12:53.629692  337177 cri.go:89] found id: "a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7"
	I1212 20:12:53.629701  337177 cri.go:89] found id: "cc941461156ed0cf714b01a2f22277d720bedfc937d891d687f0e2e22e6b697a"
	I1212 20:12:53.629706  337177 cri.go:89] found id: ""
	I1212 20:12:53.629753  337177 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:53.642247  337177 retry.go:31] will retry after 381.502123ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:53Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:12:54.024806  337177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:12:54.038662  337177 pause.go:52] kubelet running: false
	I1212 20:12:54.038725  337177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 20:12:54.202184  337177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 20:12:54.202269  337177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 20:12:54.273031  337177 cri.go:89] found id: "e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c"
	I1212 20:12:54.273064  337177 cri.go:89] found id: "44c3d08146af4a11949a7d4c4e0983875afd2b7ddd7a3408190d4ac8748b9d41"
	I1212 20:12:54.273076  337177 cri.go:89] found id: "9392cc35dd05aa59c29ff54e9dcf13b8a7dbd9d5eb3b57f5998b857dc3679304"
	I1212 20:12:54.273081  337177 cri.go:89] found id: "db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa"
	I1212 20:12:54.273086  337177 cri.go:89] found id: "e41bd589f9d6e6b003b2c73ebdd9a095cb6e17b960d1b4da2b23c408cf0cb8ab"
	I1212 20:12:54.273090  337177 cri.go:89] found id: "33c3adae59f67985263e48c4dcbeb792ce1fb117cf8d4ff5efb24caa08cbb03d"
	I1212 20:12:54.273095  337177 cri.go:89] found id: "71b86d1be5120eb4253f8e9ab45b12d91a5b1989d2e35061f4250da25598d54b"
	I1212 20:12:54.273099  337177 cri.go:89] found id: "cb3e0992ee8404f1e2603f20c229f9311a1e0d6209d65cd7c650f29bc80627f2"
	I1212 20:12:54.273103  337177 cri.go:89] found id: "19e9c49dfeed28b153429669e7b559f50cc7919c3030acc9d2ad133418bc6615"
	I1212 20:12:54.273112  337177 cri.go:89] found id: "a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7"
	I1212 20:12:54.273118  337177 cri.go:89] found id: "cc941461156ed0cf714b01a2f22277d720bedfc937d891d687f0e2e22e6b697a"
	I1212 20:12:54.273122  337177 cri.go:89] found id: ""
	I1212 20:12:54.273183  337177 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:54.337517  337177 out.go:203] 
	W1212 20:12:54.398847  337177 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:12:54.398894  337177 out.go:285] * 
	* 
	W1212 20:12:54.403835  337177 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:12:54.425004  337177 out.go:203] 

                                                
                                                
** /stderr **
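The stderr above shows where the pause flow gives up: kubelet is found running and disabled, crictl still reports the kube-system containers, but every `sudo runc list -f json` attempt fails with `open /run/runc: no such file or directory`, so the backoff retries (174ms, 213ms, 381ms) never succeed and the command exits with GUEST_PAUSE. A minimal sketch of re-running the same probes by hand against this profile is below; the first three commands are the ones visible in the log, while the last line is only an assumption about how one might go looking for the runtime's actual state directory on a crio node, not a documented minikube step.

	minikube ssh -p embed-certs-399565 -- sudo systemctl is-active kubelet       # pause checks/disables this unit first
	minikube ssh -p embed-certs-399565 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube ssh -p embed-certs-399565 -- sudo runc list -f json                 # fails here: /run/runc does not exist
	minikube ssh -p embed-certs-399565 -- sudo ls /run                           # assumption: inspect /run for the real runc/crun state dir
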
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-399565 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-399565
helpers_test.go:244: (dbg) docker inspect embed-certs-399565:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1",
	        "Created": "2025-12-12T20:10:48.358308511Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:11:54.738621896Z",
	            "FinishedAt": "2025-12-12T20:11:53.535155057Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/hosts",
	        "LogPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1-json.log",
	        "Name": "/embed-certs-399565",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-399565:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-399565",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1",
	                "LowerDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-399565",
	                "Source": "/var/lib/docker/volumes/embed-certs-399565/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-399565",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-399565",
	                "name.minikube.sigs.k8s.io": "embed-certs-399565",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1e877d741e865d60de3124fe8e3f1eda4c0a5f0a974eb066d270e2debe7c5f4d",
	            "SandboxKey": "/var/run/docker/netns/1e877d741e86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-399565": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6c29c7e79781ac9639d4796d21d5075ddac5af9af8ecc99427d5e7f6d18273d7",
	                    "EndpointID": "aaee8a269cd5d72b4bb9707d59ae8c72a6e985a10bc8c08d588ddf26b7963dc4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:25:52:65:d2:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-399565",
	                        "71e8830a236d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
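For reference, the port the pause step dialed (127.0.0.1:33114 in the sshutil line of the pause log) comes straight out of this inspect output: 22/tcp is published on 127.0.0.1:33114 and the container still reports State "running". The same Go template the test runs can be reused by hand; the command below is just the logged invocation rewritten with plain shell quoting, nothing beyond what the log already shows.

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-399565
	# prints 33114 for this run, matching the ssh client the pause step opened
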
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565: exit status 2 (374.715246ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
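The `--format={{.Host}}` template only prints the host state, so it hides which component tripped the non-zero exit; minikube status exits non-zero when any component is not in its expected state, and after the partial pause the kubelet had already been disabled, which presumably accounts for exit status 2 here while the host itself still prints Running (the exact exit-code meaning is an assumption, not taken from this log). A fuller check against this profile would look like the sketch below; Host, Kubelet and APIServer are standard status fields.

	out/minikube-linux-amd64 status -p embed-certs-399565
	out/minikube-linux-amd64 status -p embed-certs-399565 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
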
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-399565 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-399565 logs -n 25: (1.783433552s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-789448 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo docker system info                                                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                              │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p default-k8s-diff-port-433034                                                                                                                                    │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cri-dockerd --version                                                                                                                       │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl status containerd --all --full --no-pager                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat containerd --no-pager                                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /lib/systemd/system/containerd.service                                                                                                  │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/containerd/config.toml                                                                                                             │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo containerd config dump                                                                                                                      │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl status crio --all --full --no-pager                                                                                               │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat crio --no-pager                                                                                                               │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                     │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo crio config                                                                                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p kindnet-789448                                                                                                                                                  │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p default-k8s-diff-port-433034                                                                                                                                    │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ start   │ -p custom-flannel-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-789448        │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ start   │ -p enable-default-cni-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio    │ enable-default-cni-789448    │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ image   │ embed-certs-399565 image list --format=json                                                                                                                        │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ pause   │ -p embed-certs-399565 --alsologtostderr -v=1                                                                                                                       │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:12:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:12:51.355797  336891 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:51.355936  336891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:51.355945  336891 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:51.355949  336891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:51.356168  336891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:12:51.356806  336891 out.go:368] Setting JSON to false
	I1212 20:12:51.358406  336891 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3318,"bootTime":1765567053,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:12:51.358488  336891 start.go:143] virtualization: kvm guest
	I1212 20:12:51.360659  336891 out.go:179] * [enable-default-cni-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:12:51.363063  336891 notify.go:221] Checking for updates...
	I1212 20:12:51.363078  336891 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:12:51.369175  336891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:12:51.380935  336891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:12:51.382793  336891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:12:51.384626  336891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:12:51.386616  336891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:12:51.390022  336891 config.go:182] Loaded profile config "calico-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:51.390168  336891 config.go:182] Loaded profile config "custom-flannel-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:51.390333  336891 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:51.390482  336891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:12:51.415073  336891 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:12:51.415183  336891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:51.471467  336891 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:70 SystemTime:2025-12-12 20:12:51.460942874 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:51.471604  336891 docker.go:319] overlay module found
	I1212 20:12:51.533907  336891 out.go:179] * Using the docker driver based on user configuration
	I1212 20:12:51.567303  336891 start.go:309] selected driver: docker
	I1212 20:12:51.567327  336891 start.go:927] validating driver "docker" against <nil>
	I1212 20:12:51.567343  336891 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:12:51.568178  336891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:51.655078  336891 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:70 SystemTime:2025-12-12 20:12:51.644484424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:51.655237  336891 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1212 20:12:51.655454  336891 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1212 20:12:51.655493  336891 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:51.744440  336891 out.go:179] * Using Docker driver with root privileges
	I1212 20:12:51.883950  336891 cni.go:84] Creating CNI manager for "bridge"
	I1212 20:12:51.883978  336891 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 20:12:51.884083  336891 start.go:353] cluster config:
	{Name:enable-default-cni-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:12:52.001160  336891 out.go:179] * Starting "enable-default-cni-789448" primary control-plane node in "enable-default-cni-789448" cluster
	I1212 20:12:52.022457  336891 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:12:52.052181  336891 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:12:52.054198  336891 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:12:52.054225  336891 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:52.054265  336891 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:12:52.054304  336891 cache.go:65] Caching tarball of preloaded images
	I1212 20:12:52.054407  336891 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:12:52.054423  336891 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:12:52.054557  336891 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/enable-default-cni-789448/config.json ...
	I1212 20:12:52.054580  336891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/enable-default-cni-789448/config.json: {Name:mk9af81fd3c366863103700101a946fbb98b8a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:52.078659  336891 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:12:52.078686  336891 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:12:52.078705  336891 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:12:52.078742  336891 start.go:360] acquireMachinesLock for enable-default-cni-789448: {Name:mk7568efdf4bfb1424aecc843b664068d92fdce8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:12:52.078844  336891 start.go:364] duration metric: took 82.123µs to acquireMachinesLock for "enable-default-cni-789448"
	I1212 20:12:52.078878  336891 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-789448 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:12:52.078984  336891 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:12:48.442428  335976 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:12:48.442665  335976 start.go:159] libmachine.API.Create for "custom-flannel-789448" (driver="docker")
	I1212 20:12:48.442698  335976 client.go:173] LocalClient.Create starting
	I1212 20:12:48.442767  335976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:12:48.442803  335976 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:48.442824  335976 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:48.442901  335976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:12:48.442929  335976 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:48.442952  335976 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:48.443428  335976 cli_runner.go:164] Run: docker network inspect custom-flannel-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:12:48.459456  335976 cli_runner.go:211] docker network inspect custom-flannel-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:12:48.459526  335976 network_create.go:284] running [docker network inspect custom-flannel-789448] to gather additional debugging logs...
	I1212 20:12:48.459542  335976 cli_runner.go:164] Run: docker network inspect custom-flannel-789448
	W1212 20:12:48.476424  335976 cli_runner.go:211] docker network inspect custom-flannel-789448 returned with exit code 1
	I1212 20:12:48.476451  335976 network_create.go:287] error running [docker network inspect custom-flannel-789448]: docker network inspect custom-flannel-789448: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-789448 not found
	I1212 20:12:48.476489  335976 network_create.go:289] output of [docker network inspect custom-flannel-789448]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-789448 not found
	
	** /stderr **
	I1212 20:12:48.476608  335976 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:12:48.495070  335976 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:12:48.495711  335976 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:12:48.496386  335976 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:12:48.496860  335976 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c165baeec493 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:78:86:50:3b:d1} reservation:<nil>}
	I1212 20:12:48.497472  335976 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8b25279d9256 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:fe:09:dd:81:85:14} reservation:<nil>}
	I1212 20:12:48.497991  335976 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6c29c7e79781 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c6:0e:96:4d:9c:d8} reservation:<nil>}
	I1212 20:12:48.498824  335976 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f33340}
	I1212 20:12:48.498850  335976 network_create.go:124] attempt to create docker network custom-flannel-789448 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1212 20:12:48.498888  335976 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-789448 custom-flannel-789448
	I1212 20:12:48.546232  335976 network_create.go:108] docker network custom-flannel-789448 192.168.103.0/24 created
	I1212 20:12:48.546258  335976 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-789448" container
	I1212 20:12:48.546363  335976 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:12:48.563357  335976 cli_runner.go:164] Run: docker volume create custom-flannel-789448 --label name.minikube.sigs.k8s.io=custom-flannel-789448 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:12:48.580370  335976 oci.go:103] Successfully created a docker volume custom-flannel-789448
	I1212 20:12:48.580442  335976 cli_runner.go:164] Run: docker run --rm --name custom-flannel-789448-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-789448 --entrypoint /usr/bin/test -v custom-flannel-789448:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:12:49.106760  335976 oci.go:107] Successfully prepared a docker volume custom-flannel-789448
	I1212 20:12:49.106822  335976 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:49.106831  335976 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:12:49.106887  335976 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:12:52.402928  335976 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (3.296003637s)
	I1212 20:12:52.402957  335976 kic.go:203] duration metric: took 3.296122938s to extract preloaded images to volume ...
	W1212 20:12:52.403034  335976 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:12:52.403062  335976 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:12:52.403095  335976 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:12:52.480350  335976 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-789448 --name custom-flannel-789448 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-789448 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-789448 --network custom-flannel-789448 --ip 192.168.103.2 --volume custom-flannel-789448:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:12:52.784772  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Running}}
	I1212 20:12:52.804897  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Status}}
	I1212 20:12:52.825396  335976 cli_runner.go:164] Run: docker exec custom-flannel-789448 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:12:52.871228  335976 oci.go:144] the created container "custom-flannel-789448" has a running status.
	I1212 20:12:52.871260  335976 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/custom-flannel-789448/id_rsa...
	I1212 20:12:53.144603  335976 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/custom-flannel-789448/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:12:53.175712  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Status}}
	I1212 20:12:53.192910  335976 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:12:53.192933  335976 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-789448 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:12:53.242056  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Status}}
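
The container-state checks above shell out to "docker container inspect --format={{.State.Status}}". A minimal Go sketch of that polling pattern follows; it is illustrative only (not minikube's cli_runner), and the container name is simply the profile name taken from the log above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerStatus mirrors the inspect command in the log: it returns the
	// Docker state string ("running", "exited", ...) for the named container.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		const name = "custom-flannel-789448" // profile/container name from the log above
		for i := 0; i < 15; i++ {
			if status, err := containerStatus(name); err == nil && status == "running" {
				fmt.Println("container is running")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for the container to reach the running state")
	}
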
	I1212 20:12:49.409825  325830 system_pods.go:86] 9 kube-system pods found
	I1212 20:12:49.409866  325830 system_pods.go:89] "calico-kube-controllers-5c676f698c-qdgmz" [c3d8ac3e-005c-455c-ac2e-9c6dfa1530f8] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1212 20:12:49.409878  325830 system_pods.go:89] "calico-node-hd58q" [b41b3fa4-ac0d-4f46-9ac4-c0795974470f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1212 20:12:49.409889  325830 system_pods.go:89] "coredns-66bc5c9577-dd2kh" [c32974a0-1ca5-4be4-8cf3-9675fe0cf798] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:49.409895  325830 system_pods.go:89] "etcd-calico-789448" [5de1a358-4540-4574-a1a3-b3855050644c] Running
	I1212 20:12:49.409901  325830 system_pods.go:89] "kube-apiserver-calico-789448" [fc139bf1-f9df-41ac-b6a7-b276ad25c967] Running
	I1212 20:12:49.409907  325830 system_pods.go:89] "kube-controller-manager-calico-789448" [e7069273-5492-418b-934f-dd1db739703f] Running
	I1212 20:12:49.409921  325830 system_pods.go:89] "kube-proxy-7xrs6" [c9397fe0-32dc-4e30-8c91-50e2b674114e] Running
	I1212 20:12:49.409927  325830 system_pods.go:89] "kube-scheduler-calico-789448" [38bb66c5-31b4-48c4-85db-8a575dcfc930] Running
	I1212 20:12:49.409933  325830 system_pods.go:89] "storage-provisioner" [eda19077-6582-4fe8-93fe-a84f33e5d168] Running
	I1212 20:12:49.409952  325830 retry.go:31] will retry after 2.235239313s: missing components: kube-dns
	I1212 20:12:51.651471  325830 system_pods.go:86] 9 kube-system pods found
	I1212 20:12:51.651510  325830 system_pods.go:89] "calico-kube-controllers-5c676f698c-qdgmz" [c3d8ac3e-005c-455c-ac2e-9c6dfa1530f8] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1212 20:12:51.651521  325830 system_pods.go:89] "calico-node-hd58q" [b41b3fa4-ac0d-4f46-9ac4-c0795974470f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1212 20:12:51.651532  325830 system_pods.go:89] "coredns-66bc5c9577-dd2kh" [c32974a0-1ca5-4be4-8cf3-9675fe0cf798] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:51.651537  325830 system_pods.go:89] "etcd-calico-789448" [5de1a358-4540-4574-a1a3-b3855050644c] Running
	I1212 20:12:51.651547  325830 system_pods.go:89] "kube-apiserver-calico-789448" [fc139bf1-f9df-41ac-b6a7-b276ad25c967] Running
	I1212 20:12:51.651552  325830 system_pods.go:89] "kube-controller-manager-calico-789448" [e7069273-5492-418b-934f-dd1db739703f] Running
	I1212 20:12:51.651557  325830 system_pods.go:89] "kube-proxy-7xrs6" [c9397fe0-32dc-4e30-8c91-50e2b674114e] Running
	I1212 20:12:51.651564  325830 system_pods.go:89] "kube-scheduler-calico-789448" [38bb66c5-31b4-48c4-85db-8a575dcfc930] Running
	I1212 20:12:51.651569  325830 system_pods.go:89] "storage-provisioner" [eda19077-6582-4fe8-93fe-a84f33e5d168] Running
	I1212 20:12:51.651585  325830 retry.go:31] will retry after 3.463747764s: missing components: kube-dns
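
The "will retry after ..." lines above come from a simple poll-and-retry loop that waits for kube-dns (CoreDNS) to come up. A rough Go sketch of that pattern follows; it is not the test suite's retry.go, and it assumes kubectl is on PATH, that a calico-789448 kubeconfig context exists, and that CoreDNS carries the conventional k8s-app=kube-dns label.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// kubeDNSRunning lists CoreDNS pods by their conventional label and reports
	// whether every matching pod is in the Running phase.
	func kubeDNSRunning() bool {
		out, err := exec.Command("kubectl", "--context", "calico-789448",
			"-n", "kube-system", "get", "pods", "-l", "k8s-app=kube-dns",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return false
		}
		phases := strings.Fields(string(out))
		if len(phases) == 0 {
			return false
		}
		for _, p := range phases {
			if p != "Running" {
				return false
			}
		}
		return true
	}

	func main() {
		delay := 2 * time.Second
		for attempt := 1; attempt <= 10; attempt++ {
			if kubeDNSRunning() {
				fmt.Println("all system components are ready")
				return
			}
			fmt.Printf("attempt %d: missing components: kube-dns; will retry after %s\n", attempt, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the delay, roughly like the jittered waits in the log
		}
		fmt.Println("gave up waiting for kube-dns")
	}
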
	
	
	==> CRI-O <==
	Dec 12 20:12:29 embed-certs-399565 crio[558]: time="2025-12-12T20:12:29.168194048Z" level=info msg="Started container" PID=1763 containerID=1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper id=0ee9dab5-e239-49e6-b80b-4ad04b68e475 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6768f58e387b134d99847ce544a33bb5b8de103a78c4f499a06c59a0b2629744
	Dec 12 20:12:29 embed-certs-399565 crio[558]: time="2025-12-12T20:12:29.205804647Z" level=info msg="Removing container: 3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715" id=65c5d21e-344c-42be-90ce-0659d138af82 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:29 embed-certs-399565 crio[558]: time="2025-12-12T20:12:29.227094189Z" level=info msg="Removed container 3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=65c5d21e-344c-42be-90ce-0659d138af82 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.22473963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8caff44-ef12-4545-bcfd-a2233955ceb7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.225746445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a3d8ffc4-853c-4763-98d6-fa8558c2dc80 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.227013842Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bd29cc24-1e3e-48d7-bc6f-69a7fa5cac2d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.227137258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.231674933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.231853714Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f7a4bec6796f77ab0b33ef07faa4cabdaae05b37118a814c108c8049102b004/merged/etc/passwd: no such file or directory"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.231882589Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f7a4bec6796f77ab0b33ef07faa4cabdaae05b37118a814c108c8049102b004/merged/etc/group: no such file or directory"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.232189663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.264147741Z" level=info msg="Created container e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c: kube-system/storage-provisioner/storage-provisioner" id=bd29cc24-1e3e-48d7-bc6f-69a7fa5cac2d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.265026205Z" level=info msg="Starting container: e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c" id=d987759b-6f99-40de-8936-658eecaa4d78 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.267556116Z" level=info msg="Started container" PID=1777 containerID=e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c description=kube-system/storage-provisioner/storage-provisioner id=d987759b-6f99-40de-8936-658eecaa4d78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f90b800765f1bc01c54708de39ff9dd1ac4d3c7d95826bb026b4012ffae58461
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.100016616Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b72bee4b-2e78-468f-95b3-83180ddb1d78 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.101126019Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6cdf8fa4-8de2-4452-97c6-60435d76bc1d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.102396802Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=425c7a8e-5c68-426a-ac80-b809d74803c3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.102536772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.108697948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.109133452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.144799021Z" level=info msg="Created container a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=425c7a8e-5c68-426a-ac80-b809d74803c3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.14546538Z" level=info msg="Starting container: a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7" id=b1d74369-1c9f-469f-a704-941c148f3c5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.147519393Z" level=info msg="Started container" PID=1811 containerID=a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper id=b1d74369-1c9f-469f-a704-941c148f3c5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6768f58e387b134d99847ce544a33bb5b8de103a78c4f499a06c59a0b2629744
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.27276927Z" level=info msg="Removing container: 1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee" id=8ea54dfa-f0f3-4fb8-a9aa-0b9fd6e7f048 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.290151997Z" level=info msg="Removed container 1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=8ea54dfa-f0f3-4fb8-a9aa-0b9fd6e7f048 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a02bc8eea52d1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   6768f58e387b1       dashboard-metrics-scraper-6ffb444bf9-8zdjv   kubernetes-dashboard
	e860f29f72e60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   f90b800765f1b       storage-provisioner                          kube-system
	cc941461156ed       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   e1b0af75b8227       kubernetes-dashboard-855c9754f9-hwvvn        kubernetes-dashboard
	44c3d08146af4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   6111d79a1cb8b       coredns-66bc5c9577-zg2v9                     kube-system
	7be83d0db46c3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   5bb835dcb0967       busybox                                      default
	9392cc35dd05a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           50 seconds ago      Running             kube-proxy                  0                   1b05731117e62       kube-proxy-xgs9b                             kube-system
	db7cce6e798bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   f90b800765f1b       storage-provisioner                          kube-system
	e41bd589f9d6e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   dc3a788c5b7cc       kindnet-5fbmr                                kube-system
	33c3adae59f67       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   b19f9e833eb5b       etcd-embed-certs-399565                      kube-system
	71b86d1be5120       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           53 seconds ago      Running             kube-scheduler              0                   e3ff004f31075       kube-scheduler-embed-certs-399565            kube-system
	cb3e0992ee840       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           53 seconds ago      Running             kube-apiserver              0                   85d78661ad430       kube-apiserver-embed-certs-399565            kube-system
	19e9c49dfeed2       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           53 seconds ago      Running             kube-controller-manager     0                   e7ed395481671       kube-controller-manager-embed-certs-399565   kube-system
	
	
	==> coredns [44c3d08146af4a11949a7d4c4e0983875afd2b7ddd7a3408190d4ac8748b9d41] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37301 - 61963 "HINFO IN 4080172600938501915.6452157330515326122. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.109073884s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
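
The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors mean CoreDNS could not reach the in-cluster kubernetes Service while the control plane was restarting. Below is a minimal connectivity probe, intended to be run from a pod or from the node's network namespace; 10.96.0.1:443 is simply the ClusterIP seen in the log, not a value this sketch derives.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt a plain TCP connection to the address CoreDNS was timing out against.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("kubernetes Service unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("kubernetes Service reachable")
	}
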
	
	
	==> describe nodes <==
	Name:               embed-certs-399565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-399565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=embed-certs-399565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_11_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:11:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-399565
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:12:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-399565
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                d4ee55d6-eeec-48fd-851e-1386ebc672fc
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-zg2v9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-399565                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-5fbmr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-399565             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-399565    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-xgs9b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-399565             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8zdjv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hwvvn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node embed-certs-399565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node embed-certs-399565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node embed-certs-399565 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node embed-certs-399565 event: Registered Node embed-certs-399565 in Controller
	  Normal  NodeReady                92s                kubelet          Node embed-certs-399565 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-399565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-399565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-399565 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-399565 event: Registered Node embed-certs-399565 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [33c3adae59f67985263e48c4dcbeb792ce1fb117cf8d4ff5efb24caa08cbb03d] <==
	{"level":"warn","ts":"2025-12-12T20:12:03.991684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:03.998810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.005382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.012897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.028942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.035808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.042117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.098028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T20:12:12.269651Z","caller":"traceutil/trace.go:172","msg":"trace[414632648] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"103.257689ms","start":"2025-12-12T20:12:12.166374Z","end":"2025-12-12T20:12:12.269632Z","steps":["trace[414632648] 'process raft request'  (duration: 81.276257ms)","trace[414632648] 'compare'  (duration: 21.882809ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:12.641226Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.207144ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766792289575087 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" mod_revision:575 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" value_size:7926 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-12T20:12:12.641406Z","caller":"traceutil/trace.go:172","msg":"trace[2072466225] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"360.882281ms","start":"2025-12-12T20:12:12.280481Z","end":"2025-12-12T20:12:12.641363Z","steps":["trace[2072466225] 'process raft request'  (duration: 182.064815ms)","trace[2072466225] 'compare'  (duration: 178.130424ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:12.641512Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T20:12:12.280464Z","time spent":"361.000228ms","remote":"127.0.0.1:39074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7994,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" mod_revision:575 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" value_size:7926 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" > >"}
	{"level":"info","ts":"2025-12-12T20:12:13.174710Z","caller":"traceutil/trace.go:172","msg":"trace[569438513] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"118.956074ms","start":"2025-12-12T20:12:13.055737Z","end":"2025-12-12T20:12:13.174693Z","steps":["trace[569438513] 'process raft request'  (duration: 118.759588ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:12:13.314047Z","caller":"traceutil/trace.go:172","msg":"trace[1106449450] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:609; }","duration":"137.163446ms","start":"2025-12-12T20:12:13.176857Z","end":"2025-12-12T20:12:13.314021Z","steps":["trace[1106449450] 'read index received'  (duration: 137.156841ms)","trace[1106449450] 'applied index is now lower than readState.Index'  (duration: 5.626µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:13.437504Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"260.591892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:12:13.437604Z","caller":"traceutil/trace.go:172","msg":"trace[633390736] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:580; }","duration":"260.726514ms","start":"2025-12-12T20:12:13.176853Z","end":"2025-12-12T20:12:13.437580Z","steps":["trace[633390736] 'agreement among raft nodes before linearized reading'  (duration: 137.258708ms)","trace[633390736] 'range keys from in-memory index tree'  (duration: 123.302841ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:13.437701Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.530926ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766792289575099 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv.188090dcda47fefe\" mod_revision:576 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv.188090dcda47fefe\" value_size:875 lease:6571766792289574887 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv.188090dcda47fefe\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-12T20:12:13.437778Z","caller":"traceutil/trace.go:172","msg":"trace[703351354] linearizableReadLoop","detail":"{readStateIndex:610; appliedIndex:609; }","duration":"123.646238ms","start":"2025-12-12T20:12:13.314121Z","end":"2025-12-12T20:12:13.437767Z","steps":["trace[703351354] 'read index received'  (duration: 28.066µs)","trace[703351354] 'applied index is now lower than readState.Index'  (duration: 123.617145ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:12:13.437799Z","caller":"traceutil/trace.go:172","msg":"trace[938507542] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"261.221326ms","start":"2025-12-12T20:12:13.176554Z","end":"2025-12-12T20:12:13.437775Z","steps":["trace[938507542] 'process raft request'  (duration: 137.549339ms)","trace[938507542] 'compare'  (duration: 123.374738ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:13.437910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.872713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv\" limit:1 ","response":"range_response_count:1 size:4721"}
	{"level":"warn","ts":"2025-12-12T20:12:13.437966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.391319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-399565\" limit:1 ","response":"range_response_count:1 size:5708"}
	{"level":"info","ts":"2025-12-12T20:12:13.438008Z","caller":"traceutil/trace.go:172","msg":"trace[1102066518] range","detail":"{range_begin:/registry/minions/embed-certs-399565; range_end:; response_count:1; response_revision:581; }","duration":"259.438187ms","start":"2025-12-12T20:12:13.178561Z","end":"2025-12-12T20:12:13.437999Z","steps":["trace[1102066518] 'agreement among raft nodes before linearized reading'  (duration: 259.303932ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:12:13.437986Z","caller":"traceutil/trace.go:172","msg":"trace[489115295] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv; range_end:; response_count:1; response_revision:581; }","duration":"259.922261ms","start":"2025-12-12T20:12:13.178010Z","end":"2025-12-12T20:12:13.437932Z","steps":["trace[489115295] 'agreement among raft nodes before linearized reading'  (duration: 259.788263ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:12:13.598676Z","caller":"traceutil/trace.go:172","msg":"trace[1946495577] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"153.717879ms","start":"2025-12-12T20:12:13.444939Z","end":"2025-12-12T20:12:13.598657Z","steps":["trace[1946495577] 'process raft request'  (duration: 122.640464ms)","trace[1946495577] 'compare'  (duration: 30.983666ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:12:45.872714Z","caller":"traceutil/trace.go:172","msg":"trace[2116460126] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"114.981875ms","start":"2025-12-12T20:12:45.757707Z","end":"2025-12-12T20:12:45.872689Z","steps":["trace[2116460126] 'process raft request'  (duration: 47.561149ms)","trace[2116460126] 'compare'  (duration: 67.316754ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:12:56 up 55 min,  0 user,  load average: 5.26, 3.29, 2.04
	Linux embed-certs-399565 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e41bd589f9d6e6b003b2c73ebdd9a095cb6e17b960d1b4da2b23c408cf0cb8ab] <==
	I1212 20:12:05.670705       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:12:05.670961       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 20:12:05.671124       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:12:05.671157       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:12:05.671193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:12:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:12:05.870265       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:12:05.870370       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:12:05.870390       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:12:05.933724       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:12:06.433684       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:12:06.433711       1 metrics.go:72] Registering metrics
	I1212 20:12:06.433783       1 controller.go:711] "Syncing nftables rules"
	I1212 20:12:15.870658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:15.870725       1 main.go:301] handling current node
	I1212 20:12:25.872378       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:25.872407       1 main.go:301] handling current node
	I1212 20:12:35.870846       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:35.870890       1 main.go:301] handling current node
	I1212 20:12:45.876346       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:45.876400       1 main.go:301] handling current node
	I1212 20:12:55.873393       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:55.873429       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cb3e0992ee8404f1e2603f20c229f9311a1e0d6209d65cd7c650f29bc80627f2] <==
	I1212 20:12:04.600601       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 20:12:04.600639       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:12:04.599345       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 20:12:04.599363       1 aggregator.go:171] initial CRD sync complete...
	I1212 20:12:04.601043       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:12:04.599379       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 20:12:04.601101       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 20:12:04.601148       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:12:04.601171       1 cache.go:39] Caches are synced for autoregister controller
	E1212 20:12:04.605390       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:12:04.606682       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:12:04.650056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:12:04.657260       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:12:04.895487       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:12:04.923178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:12:04.945511       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:12:04.951338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:12:04.958194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:12:04.990892       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.241.9"}
	I1212 20:12:04.998905       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.170.12"}
	I1212 20:12:05.503481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:12:08.328836       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:12:08.328895       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:12:08.379661       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:12:08.529126       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [19e9c49dfeed28b153429669e7b559f50cc7919c3030acc9d2ad133418bc6615] <==
	I1212 20:12:07.781447       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:12:07.782519       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:12:07.783656       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 20:12:07.785882       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 20:12:07.788016       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 20:12:07.788098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 20:12:07.788167       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-399565"
	I1212 20:12:07.788218       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 20:12:07.791347       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:12:07.793607       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 20:12:07.794820       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 20:12:07.806166       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:12:07.826116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 20:12:07.826141       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 20:12:07.826163       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 20:12:07.826258       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:12:07.826305       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 20:12:07.826261       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 20:12:07.831517       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:12:07.831540       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:12:07.942415       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 20:12:08.025983       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:12:08.026009       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 20:12:08.026017       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 20:12:08.042906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9392cc35dd05aa59c29ff54e9dcf13b8a7dbd9d5eb3b57f5998b857dc3679304] <==
	I1212 20:12:05.492213       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:12:05.554621       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:12:05.655723       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:12:05.655777       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1212 20:12:05.655869       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:12:05.677814       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:12:05.677870       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:12:05.683913       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:12:05.684364       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:12:05.684444       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:12:05.686318       1 config.go:200] "Starting service config controller"
	I1212 20:12:05.686335       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:12:05.686375       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:12:05.686382       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:12:05.686394       1 config.go:309] "Starting node config controller"
	I1212 20:12:05.686401       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:12:05.686407       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:12:05.686401       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:12:05.686587       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:12:05.787292       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:12:05.787318       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:12:05.787292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [71b86d1be5120eb4253f8e9ab45b12d91a5b1989d2e35061f4250da25598d54b] <==
	I1212 20:12:03.375236       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:12:04.524816       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:12:04.524852       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:12:04.524866       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:12:04.524876       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:12:04.570188       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 20:12:04.570226       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:12:04.572989       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:12:04.573034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:12:04.573458       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:12:04.573531       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:12:04.673928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:12:08 embed-certs-399565 kubelet[722]: I1212 20:12:08.577881     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/764cbf67-466b-495a-a5d8-bf8234eb5da2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hwvvn\" (UID: \"764cbf67-466b-495a-a5d8-bf8234eb5da2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hwvvn"
	Dec 12 20:12:11 embed-certs-399565 kubelet[722]: I1212 20:12:11.153622     722 scope.go:117] "RemoveContainer" containerID="485caaea81b92fab60cd86ab36159709aef0f8e99eea6b0bea5849fd11e07b1e"
	Dec 12 20:12:12 embed-certs-399565 kubelet[722]: I1212 20:12:12.158332     722 scope.go:117] "RemoveContainer" containerID="485caaea81b92fab60cd86ab36159709aef0f8e99eea6b0bea5849fd11e07b1e"
	Dec 12 20:12:12 embed-certs-399565 kubelet[722]: I1212 20:12:12.158452     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:12 embed-certs-399565 kubelet[722]: E1212 20:12:12.158641     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:13 embed-certs-399565 kubelet[722]: I1212 20:12:13.163222     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:13 embed-certs-399565 kubelet[722]: E1212 20:12:13.163441     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:16 embed-certs-399565 kubelet[722]: I1212 20:12:16.212800     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hwvvn" podStartSLOduration=1.572122381 podStartE2EDuration="8.212775303s" podCreationTimestamp="2025-12-12 20:12:08 +0000 UTC" firstStartedPulling="2025-12-12 20:12:08.777701914 +0000 UTC m=+6.777664319" lastFinishedPulling="2025-12-12 20:12:15.418354831 +0000 UTC m=+13.418317241" observedRunningTime="2025-12-12 20:12:16.212651473 +0000 UTC m=+14.212613885" watchObservedRunningTime="2025-12-12 20:12:16.212775303 +0000 UTC m=+14.212737743"
	Dec 12 20:12:18 embed-certs-399565 kubelet[722]: I1212 20:12:18.443224     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:18 embed-certs-399565 kubelet[722]: E1212 20:12:18.443472     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: I1212 20:12:29.099791     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: I1212 20:12:29.204593     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: I1212 20:12:29.204823     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: E1212 20:12:29.205017     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:36 embed-certs-399565 kubelet[722]: I1212 20:12:36.224317     722 scope.go:117] "RemoveContainer" containerID="db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa"
	Dec 12 20:12:38 embed-certs-399565 kubelet[722]: I1212 20:12:38.443834     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:38 embed-certs-399565 kubelet[722]: E1212 20:12:38.444057     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: I1212 20:12:51.099501     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: I1212 20:12:51.271267     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: I1212 20:12:51.271555     722 scope.go:117] "RemoveContainer" containerID="a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: E1212 20:12:51.272078     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: kubelet.service: Consumed 1.644s CPU time.
	
	
	==> kubernetes-dashboard [cc941461156ed0cf714b01a2f22277d720bedfc937d891d687f0e2e22e6b697a] <==
	2025/12/12 20:12:15 Starting overwatch
	2025/12/12 20:12:15 Using namespace: kubernetes-dashboard
	2025/12/12 20:12:15 Using in-cluster config to connect to apiserver
	2025/12/12 20:12:15 Using secret token for csrf signing
	2025/12/12 20:12:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:12:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:12:15 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 20:12:15 Generating JWE encryption key
	2025/12/12 20:12:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:12:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:12:15 Initializing JWE encryption key from synchronized object
	2025/12/12 20:12:15 Creating in-cluster Sidecar client
	2025/12/12 20:12:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:12:15 Serving insecurely on HTTP port: 9090
	2025/12/12 20:12:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa] <==
	I1212 20:12:05.454248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:12:35.456479       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c] <==
	I1212 20:12:36.281719       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:12:36.290611       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:12:36.290659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:12:36.293348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:39.748549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:44.009381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:47.607781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:50.661646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.683931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.688365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:53.688561       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:12:53.688620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ba9b917-4c14-4eae-ad77-6eb4b88284eb", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-399565_a8811eac-b8b5-4196-a0ba-b80c5eebdfc8 became leader
	I1212 20:12:53.688667       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-399565_a8811eac-b8b5-4196-a0ba-b80c5eebdfc8!
	W1212 20:12:53.690523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.693419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:53.788790       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-399565_a8811eac-b8b5-4196-a0ba-b80c5eebdfc8!
	W1212 20:12:55.696530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:55.718717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-399565 -n embed-certs-399565
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-399565 -n embed-certs-399565: exit status 2 (386.147093ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-399565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-399565
helpers_test.go:244: (dbg) docker inspect embed-certs-399565:

-- stdout --
	[
	    {
	        "Id": "71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1",
	        "Created": "2025-12-12T20:10:48.358308511Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:11:54.738621896Z",
	            "FinishedAt": "2025-12-12T20:11:53.535155057Z"
	        },
	        "Image": "sha256:1ca69fb46d667873e297ff8975852d6be818eb624529e750e2b09ff0d3b0c367",
	        "ResolvConfPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/hosts",
	        "LogPath": "/var/lib/docker/containers/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1/71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1-json.log",
	        "Name": "/embed-certs-399565",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-399565:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-399565",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "71e8830a236d9369e8cea7538408472309deacfa903143b0db3a316298f76bf1",
	                "LowerDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2-init/diff:/var/lib/docker/overlay2/87d8f1d6be832394c20ec55eb084b3f4a084ca786856584bd9e90cdb45432d3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79b7657912b8e71e536eec636256b7f5706f9f6d36ba804943f0289661937da2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-399565",
	                "Source": "/var/lib/docker/volumes/embed-certs-399565/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-399565",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-399565",
	                "name.minikube.sigs.k8s.io": "embed-certs-399565",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1e877d741e865d60de3124fe8e3f1eda4c0a5f0a974eb066d270e2debe7c5f4d",
	            "SandboxKey": "/var/run/docker/netns/1e877d741e86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-399565": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6c29c7e79781ac9639d4796d21d5075ddac5af9af8ecc99427d5e7f6d18273d7",
	                    "EndpointID": "aaee8a269cd5d72b4bb9707d59ae8c72a6e985a10bc8c08d588ddf26b7963dc4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:25:52:65:d2:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-399565",
	                        "71e8830a236d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565: exit status 2 (353.45128ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-399565 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-399565 logs -n 25: (1.201986912s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-789448 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo docker system info                                                                                                                          │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                              │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p default-k8s-diff-port-433034                                                                                                                                    │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cri-dockerd --version                                                                                                                       │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl status containerd --all --full --no-pager                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat containerd --no-pager                                                                                                         │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /lib/systemd/system/containerd.service                                                                                                  │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo cat /etc/containerd/config.toml                                                                                                             │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo containerd config dump                                                                                                                      │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl status crio --all --full --no-pager                                                                                               │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo systemctl cat crio --no-pager                                                                                                               │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                     │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ ssh     │ -p kindnet-789448 sudo crio config                                                                                                                                 │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p kindnet-789448                                                                                                                                                  │ kindnet-789448               │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ delete  │ -p default-k8s-diff-port-433034                                                                                                                                    │ default-k8s-diff-port-433034 │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ start   │ -p custom-flannel-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-789448        │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ start   │ -p enable-default-cni-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio    │ enable-default-cni-789448    │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ image   │ embed-certs-399565 image list --format=json                                                                                                                        │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │ 12 Dec 25 20:12 UTC │
	│ pause   │ -p embed-certs-399565 --alsologtostderr -v=1                                                                                                                       │ embed-certs-399565           │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:12:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:12:51.355797  336891 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:51.355936  336891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:51.355945  336891 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:51.355949  336891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:51.356168  336891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:12:51.356806  336891 out.go:368] Setting JSON to false
	I1212 20:12:51.358406  336891 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3318,"bootTime":1765567053,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:12:51.358488  336891 start.go:143] virtualization: kvm guest
	I1212 20:12:51.360659  336891 out.go:179] * [enable-default-cni-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:12:51.363063  336891 notify.go:221] Checking for updates...
	I1212 20:12:51.363078  336891 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:12:51.369175  336891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:12:51.380935  336891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:12:51.382793  336891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:12:51.384626  336891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:12:51.386616  336891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:12:51.390022  336891 config.go:182] Loaded profile config "calico-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:51.390168  336891 config.go:182] Loaded profile config "custom-flannel-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:51.390333  336891 config.go:182] Loaded profile config "embed-certs-399565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:51.390482  336891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:12:51.415073  336891 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:12:51.415183  336891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:51.471467  336891 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:70 SystemTime:2025-12-12 20:12:51.460942874 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:51.471604  336891 docker.go:319] overlay module found
	I1212 20:12:51.533907  336891 out.go:179] * Using the docker driver based on user configuration
	I1212 20:12:51.567303  336891 start.go:309] selected driver: docker
	I1212 20:12:51.567327  336891 start.go:927] validating driver "docker" against <nil>
	I1212 20:12:51.567343  336891 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:12:51.568178  336891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:12:51.655078  336891 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:70 SystemTime:2025-12-12 20:12:51.644484424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:12:51.655237  336891 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1212 20:12:51.655454  336891 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1212 20:12:51.655493  336891 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:12:51.744440  336891 out.go:179] * Using Docker driver with root privileges
	I1212 20:12:51.883950  336891 cni.go:84] Creating CNI manager for "bridge"
	I1212 20:12:51.883978  336891 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 20:12:51.884083  336891 start.go:353] cluster config:
	{Name:enable-default-cni-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-789448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:12:52.001160  336891 out.go:179] * Starting "enable-default-cni-789448" primary control-plane node in "enable-default-cni-789448" cluster
	I1212 20:12:52.022457  336891 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:12:52.052181  336891 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:12:52.054198  336891 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:12:52.054225  336891 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:52.054265  336891 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:12:52.054304  336891 cache.go:65] Caching tarball of preloaded images
	I1212 20:12:52.054407  336891 preload.go:238] Found /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:12:52.054423  336891 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:12:52.054557  336891 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/enable-default-cni-789448/config.json ...
	I1212 20:12:52.054580  336891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/enable-default-cni-789448/config.json: {Name:mk9af81fd3c366863103700101a946fbb98b8a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:12:52.078659  336891 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:12:52.078686  336891 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:12:52.078705  336891 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:12:52.078742  336891 start.go:360] acquireMachinesLock for enable-default-cni-789448: {Name:mk7568efdf4bfb1424aecc843b664068d92fdce8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:12:52.078844  336891 start.go:364] duration metric: took 82.123µs to acquireMachinesLock for "enable-default-cni-789448"
	I1212 20:12:52.078878  336891 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-789448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-789448 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:12:52.078984  336891 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:12:48.442428  335976 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:12:48.442665  335976 start.go:159] libmachine.API.Create for "custom-flannel-789448" (driver="docker")
	I1212 20:12:48.442698  335976 client.go:173] LocalClient.Create starting
	I1212 20:12:48.442767  335976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:12:48.442803  335976 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:48.442824  335976 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:48.442901  335976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:12:48.442929  335976 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:48.442952  335976 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:48.443428  335976 cli_runner.go:164] Run: docker network inspect custom-flannel-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:12:48.459456  335976 cli_runner.go:211] docker network inspect custom-flannel-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:12:48.459526  335976 network_create.go:284] running [docker network inspect custom-flannel-789448] to gather additional debugging logs...
	I1212 20:12:48.459542  335976 cli_runner.go:164] Run: docker network inspect custom-flannel-789448
	W1212 20:12:48.476424  335976 cli_runner.go:211] docker network inspect custom-flannel-789448 returned with exit code 1
	I1212 20:12:48.476451  335976 network_create.go:287] error running [docker network inspect custom-flannel-789448]: docker network inspect custom-flannel-789448: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-789448 not found
	I1212 20:12:48.476489  335976 network_create.go:289] output of [docker network inspect custom-flannel-789448]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-789448 not found
	
	** /stderr **
	I1212 20:12:48.476608  335976 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:12:48.495070  335976 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:12:48.495711  335976 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:12:48.496386  335976 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:12:48.496860  335976 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c165baeec493 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:78:86:50:3b:d1} reservation:<nil>}
	I1212 20:12:48.497472  335976 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8b25279d9256 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:fe:09:dd:81:85:14} reservation:<nil>}
	I1212 20:12:48.497991  335976 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6c29c7e79781 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c6:0e:96:4d:9c:d8} reservation:<nil>}
	I1212 20:12:48.498824  335976 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f33340}
	I1212 20:12:48.498850  335976 network_create.go:124] attempt to create docker network custom-flannel-789448 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1212 20:12:48.498888  335976 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-789448 custom-flannel-789448
	I1212 20:12:48.546232  335976 network_create.go:108] docker network custom-flannel-789448 192.168.103.0/24 created
	I1212 20:12:48.546258  335976 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-789448" container
	I1212 20:12:48.546363  335976 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:12:48.563357  335976 cli_runner.go:164] Run: docker volume create custom-flannel-789448 --label name.minikube.sigs.k8s.io=custom-flannel-789448 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:12:48.580370  335976 oci.go:103] Successfully created a docker volume custom-flannel-789448
	I1212 20:12:48.580442  335976 cli_runner.go:164] Run: docker run --rm --name custom-flannel-789448-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-789448 --entrypoint /usr/bin/test -v custom-flannel-789448:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:12:49.106760  335976 oci.go:107] Successfully prepared a docker volume custom-flannel-789448
	I1212 20:12:49.106822  335976 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:49.106831  335976 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:12:49.106887  335976 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:12:52.402928  335976 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (3.296003637s)
	I1212 20:12:52.402957  335976 kic.go:203] duration metric: took 3.296122938s to extract preloaded images to volume ...
	W1212 20:12:52.403034  335976 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:12:52.403062  335976 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:12:52.403095  335976 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:12:52.480350  335976 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-789448 --name custom-flannel-789448 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-789448 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-789448 --network custom-flannel-789448 --ip 192.168.103.2 --volume custom-flannel-789448:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:12:52.784772  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Running}}
	I1212 20:12:52.804897  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Status}}
	I1212 20:12:52.825396  335976 cli_runner.go:164] Run: docker exec custom-flannel-789448 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:12:52.871228  335976 oci.go:144] the created container "custom-flannel-789448" has a running status.
	I1212 20:12:52.871260  335976 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-5703/.minikube/machines/custom-flannel-789448/id_rsa...
	I1212 20:12:53.144603  335976 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-5703/.minikube/machines/custom-flannel-789448/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:12:53.175712  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Status}}
	I1212 20:12:53.192910  335976 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:12:53.192933  335976 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-789448 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:12:53.242056  335976 cli_runner.go:164] Run: docker container inspect custom-flannel-789448 --format={{.State.Status}}
	I1212 20:12:49.409825  325830 system_pods.go:86] 9 kube-system pods found
	I1212 20:12:49.409866  325830 system_pods.go:89] "calico-kube-controllers-5c676f698c-qdgmz" [c3d8ac3e-005c-455c-ac2e-9c6dfa1530f8] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1212 20:12:49.409878  325830 system_pods.go:89] "calico-node-hd58q" [b41b3fa4-ac0d-4f46-9ac4-c0795974470f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1212 20:12:49.409889  325830 system_pods.go:89] "coredns-66bc5c9577-dd2kh" [c32974a0-1ca5-4be4-8cf3-9675fe0cf798] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:49.409895  325830 system_pods.go:89] "etcd-calico-789448" [5de1a358-4540-4574-a1a3-b3855050644c] Running
	I1212 20:12:49.409901  325830 system_pods.go:89] "kube-apiserver-calico-789448" [fc139bf1-f9df-41ac-b6a7-b276ad25c967] Running
	I1212 20:12:49.409907  325830 system_pods.go:89] "kube-controller-manager-calico-789448" [e7069273-5492-418b-934f-dd1db739703f] Running
	I1212 20:12:49.409921  325830 system_pods.go:89] "kube-proxy-7xrs6" [c9397fe0-32dc-4e30-8c91-50e2b674114e] Running
	I1212 20:12:49.409927  325830 system_pods.go:89] "kube-scheduler-calico-789448" [38bb66c5-31b4-48c4-85db-8a575dcfc930] Running
	I1212 20:12:49.409933  325830 system_pods.go:89] "storage-provisioner" [eda19077-6582-4fe8-93fe-a84f33e5d168] Running
	I1212 20:12:49.409952  325830 retry.go:31] will retry after 2.235239313s: missing components: kube-dns
	I1212 20:12:51.651471  325830 system_pods.go:86] 9 kube-system pods found
	I1212 20:12:51.651510  325830 system_pods.go:89] "calico-kube-controllers-5c676f698c-qdgmz" [c3d8ac3e-005c-455c-ac2e-9c6dfa1530f8] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1212 20:12:51.651521  325830 system_pods.go:89] "calico-node-hd58q" [b41b3fa4-ac0d-4f46-9ac4-c0795974470f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1212 20:12:51.651532  325830 system_pods.go:89] "coredns-66bc5c9577-dd2kh" [c32974a0-1ca5-4be4-8cf3-9675fe0cf798] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:12:51.651537  325830 system_pods.go:89] "etcd-calico-789448" [5de1a358-4540-4574-a1a3-b3855050644c] Running
	I1212 20:12:51.651547  325830 system_pods.go:89] "kube-apiserver-calico-789448" [fc139bf1-f9df-41ac-b6a7-b276ad25c967] Running
	I1212 20:12:51.651552  325830 system_pods.go:89] "kube-controller-manager-calico-789448" [e7069273-5492-418b-934f-dd1db739703f] Running
	I1212 20:12:51.651557  325830 system_pods.go:89] "kube-proxy-7xrs6" [c9397fe0-32dc-4e30-8c91-50e2b674114e] Running
	I1212 20:12:51.651564  325830 system_pods.go:89] "kube-scheduler-calico-789448" [38bb66c5-31b4-48c4-85db-8a575dcfc930] Running
	I1212 20:12:51.651569  325830 system_pods.go:89] "storage-provisioner" [eda19077-6582-4fe8-93fe-a84f33e5d168] Running
	I1212 20:12:51.651585  325830 retry.go:31] will retry after 3.463747764s: missing components: kube-dns
	I1212 20:12:52.085625  336891 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:12:52.086010  336891 start.go:159] libmachine.API.Create for "enable-default-cni-789448" (driver="docker")
	I1212 20:12:52.086048  336891 client.go:173] LocalClient.Create starting
	I1212 20:12:52.086137  336891 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/ca.pem
	I1212 20:12:52.086173  336891 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:52.086199  336891 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:52.086289  336891 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-5703/.minikube/certs/cert.pem
	I1212 20:12:52.086320  336891 main.go:143] libmachine: Decoding PEM data...
	I1212 20:12:52.086339  336891 main.go:143] libmachine: Parsing certificate...
	I1212 20:12:52.086769  336891 cli_runner.go:164] Run: docker network inspect enable-default-cni-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:12:52.105146  336891 cli_runner.go:211] docker network inspect enable-default-cni-789448 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:12:52.105241  336891 network_create.go:284] running [docker network inspect enable-default-cni-789448] to gather additional debugging logs...
	I1212 20:12:52.105299  336891 cli_runner.go:164] Run: docker network inspect enable-default-cni-789448
	W1212 20:12:52.121442  336891 cli_runner.go:211] docker network inspect enable-default-cni-789448 returned with exit code 1
	I1212 20:12:52.121470  336891 network_create.go:287] error running [docker network inspect enable-default-cni-789448]: docker network inspect enable-default-cni-789448: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-789448 not found
	I1212 20:12:52.121485  336891 network_create.go:289] output of [docker network inspect enable-default-cni-789448]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-789448 not found
	
	** /stderr **
	I1212 20:12:52.121598  336891 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:12:52.145105  336891 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
	I1212 20:12:52.146034  336891 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-26148288ab51 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:49:cc:21:29:a7} reservation:<nil>}
	I1212 20:12:52.147112  336891 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3684d3b926aa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:5e:c7:18:99:d2} reservation:<nil>}
	I1212 20:12:52.148396  336891 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eaef60}
	I1212 20:12:52.148437  336891 network_create.go:124] attempt to create docker network enable-default-cni-789448 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 20:12:52.148530  336891 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-789448 enable-default-cni-789448
	I1212 20:12:52.255907  336891 network_create.go:108] docker network enable-default-cni-789448 192.168.76.0/24 created
	I1212 20:12:52.255937  336891 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-789448" container
	I1212 20:12:52.256011  336891 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:12:52.273661  336891 cli_runner.go:164] Run: docker volume create enable-default-cni-789448 --label name.minikube.sigs.k8s.io=enable-default-cni-789448 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:12:52.385749  336891 oci.go:103] Successfully created a docker volume enable-default-cni-789448
	I1212 20:12:52.385833  336891 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-789448-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-789448 --entrypoint /usr/bin/test -v enable-default-cni-789448:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:12:52.898766  336891 oci.go:107] Successfully prepared a docker volume enable-default-cni-789448
	I1212 20:12:52.898850  336891 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:12:52.898869  336891 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:12:52.898943  336891 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:12:56.145796  336891 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-789448:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (3.246776579s)
	I1212 20:12:56.145855  336891 kic.go:203] duration metric: took 3.246971803s to extract preloaded images to volume ...
	W1212 20:12:56.146060  336891 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1212 20:12:56.146104  336891 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1212 20:12:56.146156  336891 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:12:56.206077  336891 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-789448 --name enable-default-cni-789448 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-789448 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-789448 --network enable-default-cni-789448 --ip 192.168.76.2 --volume enable-default-cni-789448:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	
	
	==> CRI-O <==
	Dec 12 20:12:29 embed-certs-399565 crio[558]: time="2025-12-12T20:12:29.168194048Z" level=info msg="Started container" PID=1763 containerID=1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper id=0ee9dab5-e239-49e6-b80b-4ad04b68e475 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6768f58e387b134d99847ce544a33bb5b8de103a78c4f499a06c59a0b2629744
	Dec 12 20:12:29 embed-certs-399565 crio[558]: time="2025-12-12T20:12:29.205804647Z" level=info msg="Removing container: 3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715" id=65c5d21e-344c-42be-90ce-0659d138af82 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:29 embed-certs-399565 crio[558]: time="2025-12-12T20:12:29.227094189Z" level=info msg="Removed container 3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=65c5d21e-344c-42be-90ce-0659d138af82 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.22473963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8caff44-ef12-4545-bcfd-a2233955ceb7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.225746445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a3d8ffc4-853c-4763-98d6-fa8558c2dc80 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.227013842Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bd29cc24-1e3e-48d7-bc6f-69a7fa5cac2d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.227137258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.231674933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.231853714Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f7a4bec6796f77ab0b33ef07faa4cabdaae05b37118a814c108c8049102b004/merged/etc/passwd: no such file or directory"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.231882589Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f7a4bec6796f77ab0b33ef07faa4cabdaae05b37118a814c108c8049102b004/merged/etc/group: no such file or directory"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.232189663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.264147741Z" level=info msg="Created container e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c: kube-system/storage-provisioner/storage-provisioner" id=bd29cc24-1e3e-48d7-bc6f-69a7fa5cac2d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.265026205Z" level=info msg="Starting container: e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c" id=d987759b-6f99-40de-8936-658eecaa4d78 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:36 embed-certs-399565 crio[558]: time="2025-12-12T20:12:36.267556116Z" level=info msg="Started container" PID=1777 containerID=e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c description=kube-system/storage-provisioner/storage-provisioner id=d987759b-6f99-40de-8936-658eecaa4d78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f90b800765f1bc01c54708de39ff9dd1ac4d3c7d95826bb026b4012ffae58461
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.100016616Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b72bee4b-2e78-468f-95b3-83180ddb1d78 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.101126019Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6cdf8fa4-8de2-4452-97c6-60435d76bc1d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.102396802Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=425c7a8e-5c68-426a-ac80-b809d74803c3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.102536772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.108697948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.109133452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.144799021Z" level=info msg="Created container a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=425c7a8e-5c68-426a-ac80-b809d74803c3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.14546538Z" level=info msg="Starting container: a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7" id=b1d74369-1c9f-469f-a704-941c148f3c5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.147519393Z" level=info msg="Started container" PID=1811 containerID=a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper id=b1d74369-1c9f-469f-a704-941c148f3c5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6768f58e387b134d99847ce544a33bb5b8de103a78c4f499a06c59a0b2629744
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.27276927Z" level=info msg="Removing container: 1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee" id=8ea54dfa-f0f3-4fb8-a9aa-0b9fd6e7f048 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:51 embed-certs-399565 crio[558]: time="2025-12-12T20:12:51.290151997Z" level=info msg="Removed container 1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv/dashboard-metrics-scraper" id=8ea54dfa-f0f3-4fb8-a9aa-0b9fd6e7f048 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a02bc8eea52d1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   6768f58e387b1       dashboard-metrics-scraper-6ffb444bf9-8zdjv   kubernetes-dashboard
	e860f29f72e60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   f90b800765f1b       storage-provisioner                          kube-system
	cc941461156ed       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   e1b0af75b8227       kubernetes-dashboard-855c9754f9-hwvvn        kubernetes-dashboard
	44c3d08146af4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   6111d79a1cb8b       coredns-66bc5c9577-zg2v9                     kube-system
	7be83d0db46c3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   5bb835dcb0967       busybox                                      default
	9392cc35dd05a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           52 seconds ago      Running             kube-proxy                  0                   1b05731117e62       kube-proxy-xgs9b                             kube-system
	db7cce6e798bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   f90b800765f1b       storage-provisioner                          kube-system
	e41bd589f9d6e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   dc3a788c5b7cc       kindnet-5fbmr                                kube-system
	33c3adae59f67       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   b19f9e833eb5b       etcd-embed-certs-399565                      kube-system
	71b86d1be5120       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           55 seconds ago      Running             kube-scheduler              0                   e3ff004f31075       kube-scheduler-embed-certs-399565            kube-system
	cb3e0992ee840       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           55 seconds ago      Running             kube-apiserver              0                   85d78661ad430       kube-apiserver-embed-certs-399565            kube-system
	19e9c49dfeed2       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           55 seconds ago      Running             kube-controller-manager     0                   e7ed395481671       kube-controller-manager-embed-certs-399565   kube-system
	
	
	==> coredns [44c3d08146af4a11949a7d4c4e0983875afd2b7ddd7a3408190d4ac8748b9d41] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37301 - 61963 "HINFO IN 4080172600938501915.6452157330515326122. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.109073884s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-399565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-399565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=embed-certs-399565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_11_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:11:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-399565
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:12:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:12:35 +0000   Fri, 12 Dec 2025 20:11:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-399565
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9831a2813dcaeb3cc17b596693b7bac
	  System UUID:                d4ee55d6-eeec-48fd-851e-1386ebc672fc
	  Boot ID:                    50e046d9-33b0-4107-918f-f0a8d9513b10
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-zg2v9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-399565                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-5fbmr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-399565             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-399565    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-xgs9b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-399565             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8zdjv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hwvvn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node embed-certs-399565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node embed-certs-399565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node embed-certs-399565 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node embed-certs-399565 event: Registered Node embed-certs-399565 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-399565 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node embed-certs-399565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node embed-certs-399565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node embed-certs-399565 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-399565 event: Registered Node embed-certs-399565 in Controller
	
	
	==> dmesg <==
	[  +0.086327] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026088] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.855140] kauditd_printk_skb: 47 callbacks suppressed
	[Dec12 19:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.016395] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023890] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023861] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +1.023894] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +2.047754] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +4.031589] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[  +8.448131] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[Dec12 19:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	[ +32.252714] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1a 48 cc 76 52 b0 d2 10 f2 a2 d8 6a 08 00
	
	
	==> etcd [33c3adae59f67985263e48c4dcbeb792ce1fb117cf8d4ff5efb24caa08cbb03d] <==
	{"level":"warn","ts":"2025-12-12T20:12:03.991684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:03.998810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.005382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.012897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.028942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.035808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.042117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:12:04.098028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T20:12:12.269651Z","caller":"traceutil/trace.go:172","msg":"trace[414632648] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"103.257689ms","start":"2025-12-12T20:12:12.166374Z","end":"2025-12-12T20:12:12.269632Z","steps":["trace[414632648] 'process raft request'  (duration: 81.276257ms)","trace[414632648] 'compare'  (duration: 21.882809ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:12.641226Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.207144ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766792289575087 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" mod_revision:575 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" value_size:7926 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-12T20:12:12.641406Z","caller":"traceutil/trace.go:172","msg":"trace[2072466225] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"360.882281ms","start":"2025-12-12T20:12:12.280481Z","end":"2025-12-12T20:12:12.641363Z","steps":["trace[2072466225] 'process raft request'  (duration: 182.064815ms)","trace[2072466225] 'compare'  (duration: 178.130424ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:12.641512Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T20:12:12.280464Z","time spent":"361.000228ms","remote":"127.0.0.1:39074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7994,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" mod_revision:575 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" value_size:7926 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-399565\" > >"}
	{"level":"info","ts":"2025-12-12T20:12:13.174710Z","caller":"traceutil/trace.go:172","msg":"trace[569438513] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"118.956074ms","start":"2025-12-12T20:12:13.055737Z","end":"2025-12-12T20:12:13.174693Z","steps":["trace[569438513] 'process raft request'  (duration: 118.759588ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:12:13.314047Z","caller":"traceutil/trace.go:172","msg":"trace[1106449450] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:609; }","duration":"137.163446ms","start":"2025-12-12T20:12:13.176857Z","end":"2025-12-12T20:12:13.314021Z","steps":["trace[1106449450] 'read index received'  (duration: 137.156841ms)","trace[1106449450] 'applied index is now lower than readState.Index'  (duration: 5.626µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:13.437504Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"260.591892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T20:12:13.437604Z","caller":"traceutil/trace.go:172","msg":"trace[633390736] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:580; }","duration":"260.726514ms","start":"2025-12-12T20:12:13.176853Z","end":"2025-12-12T20:12:13.437580Z","steps":["trace[633390736] 'agreement among raft nodes before linearized reading'  (duration: 137.258708ms)","trace[633390736] 'range keys from in-memory index tree'  (duration: 123.302841ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:13.437701Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.530926ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766792289575099 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv.188090dcda47fefe\" mod_revision:576 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv.188090dcda47fefe\" value_size:875 lease:6571766792289574887 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv.188090dcda47fefe\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-12T20:12:13.437778Z","caller":"traceutil/trace.go:172","msg":"trace[703351354] linearizableReadLoop","detail":"{readStateIndex:610; appliedIndex:609; }","duration":"123.646238ms","start":"2025-12-12T20:12:13.314121Z","end":"2025-12-12T20:12:13.437767Z","steps":["trace[703351354] 'read index received'  (duration: 28.066µs)","trace[703351354] 'applied index is now lower than readState.Index'  (duration: 123.617145ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:12:13.437799Z","caller":"traceutil/trace.go:172","msg":"trace[938507542] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"261.221326ms","start":"2025-12-12T20:12:13.176554Z","end":"2025-12-12T20:12:13.437775Z","steps":["trace[938507542] 'process raft request'  (duration: 137.549339ms)","trace[938507542] 'compare'  (duration: 123.374738ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T20:12:13.437910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.872713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv\" limit:1 ","response":"range_response_count:1 size:4721"}
	{"level":"warn","ts":"2025-12-12T20:12:13.437966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.391319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-399565\" limit:1 ","response":"range_response_count:1 size:5708"}
	{"level":"info","ts":"2025-12-12T20:12:13.438008Z","caller":"traceutil/trace.go:172","msg":"trace[1102066518] range","detail":"{range_begin:/registry/minions/embed-certs-399565; range_end:; response_count:1; response_revision:581; }","duration":"259.438187ms","start":"2025-12-12T20:12:13.178561Z","end":"2025-12-12T20:12:13.437999Z","steps":["trace[1102066518] 'agreement among raft nodes before linearized reading'  (duration: 259.303932ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:12:13.437986Z","caller":"traceutil/trace.go:172","msg":"trace[489115295] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv; range_end:; response_count:1; response_revision:581; }","duration":"259.922261ms","start":"2025-12-12T20:12:13.178010Z","end":"2025-12-12T20:12:13.437932Z","steps":["trace[489115295] 'agreement among raft nodes before linearized reading'  (duration: 259.788263ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T20:12:13.598676Z","caller":"traceutil/trace.go:172","msg":"trace[1946495577] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"153.717879ms","start":"2025-12-12T20:12:13.444939Z","end":"2025-12-12T20:12:13.598657Z","steps":["trace[1946495577] 'process raft request'  (duration: 122.640464ms)","trace[1946495577] 'compare'  (duration: 30.983666ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T20:12:45.872714Z","caller":"traceutil/trace.go:172","msg":"trace[2116460126] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"114.981875ms","start":"2025-12-12T20:12:45.757707Z","end":"2025-12-12T20:12:45.872689Z","steps":["trace[2116460126] 'process raft request'  (duration: 47.561149ms)","trace[2116460126] 'compare'  (duration: 67.316754ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:12:58 up 55 min,  0 user,  load average: 5.26, 3.29, 2.04
	Linux embed-certs-399565 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e41bd589f9d6e6b003b2c73ebdd9a095cb6e17b960d1b4da2b23c408cf0cb8ab] <==
	I1212 20:12:05.670705       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 20:12:05.670961       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1212 20:12:05.671124       1 main.go:148] setting mtu 1500 for CNI 
	I1212 20:12:05.671157       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 20:12:05.671193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T20:12:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 20:12:05.870265       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 20:12:05.870370       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 20:12:05.870390       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:12:05.933724       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 20:12:06.433684       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:12:06.433711       1 metrics.go:72] Registering metrics
	I1212 20:12:06.433783       1 controller.go:711] "Syncing nftables rules"
	I1212 20:12:15.870658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:15.870725       1 main.go:301] handling current node
	I1212 20:12:25.872378       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:25.872407       1 main.go:301] handling current node
	I1212 20:12:35.870846       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:35.870890       1 main.go:301] handling current node
	I1212 20:12:45.876346       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:45.876400       1 main.go:301] handling current node
	I1212 20:12:55.873393       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1212 20:12:55.873429       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cb3e0992ee8404f1e2603f20c229f9311a1e0d6209d65cd7c650f29bc80627f2] <==
	I1212 20:12:04.600601       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 20:12:04.600639       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:12:04.599345       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 20:12:04.599363       1 aggregator.go:171] initial CRD sync complete...
	I1212 20:12:04.601043       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:12:04.599379       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 20:12:04.601101       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 20:12:04.601148       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:12:04.601171       1 cache.go:39] Caches are synced for autoregister controller
	E1212 20:12:04.605390       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:12:04.606682       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:12:04.650056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 20:12:04.657260       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:12:04.895487       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 20:12:04.923178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:12:04.945511       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:12:04.951338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:12:04.958194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:12:04.990892       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.241.9"}
	I1212 20:12:04.998905       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.170.12"}
	I1212 20:12:05.503481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:12:08.328836       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:12:08.328895       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:12:08.379661       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 20:12:08.529126       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [19e9c49dfeed28b153429669e7b559f50cc7919c3030acc9d2ad133418bc6615] <==
	I1212 20:12:07.781447       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:12:07.782519       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 20:12:07.783656       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 20:12:07.785882       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 20:12:07.788016       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 20:12:07.788098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 20:12:07.788167       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-399565"
	I1212 20:12:07.788218       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 20:12:07.791347       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:12:07.793607       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 20:12:07.794820       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 20:12:07.806166       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:12:07.826116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 20:12:07.826141       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 20:12:07.826163       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 20:12:07.826258       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:12:07.826305       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 20:12:07.826261       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 20:12:07.831517       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:12:07.831540       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:12:07.942415       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 20:12:08.025983       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:12:08.026009       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 20:12:08.026017       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 20:12:08.042906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9392cc35dd05aa59c29ff54e9dcf13b8a7dbd9d5eb3b57f5998b857dc3679304] <==
	I1212 20:12:05.492213       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:12:05.554621       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:12:05.655723       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:12:05.655777       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1212 20:12:05.655869       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:12:05.677814       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:12:05.677870       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:12:05.683913       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:12:05.684364       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:12:05.684444       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:12:05.686318       1 config.go:200] "Starting service config controller"
	I1212 20:12:05.686335       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:12:05.686375       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:12:05.686382       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:12:05.686394       1 config.go:309] "Starting node config controller"
	I1212 20:12:05.686401       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:12:05.686407       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:12:05.686401       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:12:05.686587       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:12:05.787292       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:12:05.787318       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:12:05.787292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [71b86d1be5120eb4253f8e9ab45b12d91a5b1989d2e35061f4250da25598d54b] <==
	I1212 20:12:03.375236       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:12:04.524816       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:12:04.524852       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:12:04.524866       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:12:04.524876       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:12:04.570188       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 20:12:04.570226       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:12:04.572989       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:12:04.573034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:12:04.573458       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:12:04.573531       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:12:04.673928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:12:08 embed-certs-399565 kubelet[722]: I1212 20:12:08.577881     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/764cbf67-466b-495a-a5d8-bf8234eb5da2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hwvvn\" (UID: \"764cbf67-466b-495a-a5d8-bf8234eb5da2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hwvvn"
	Dec 12 20:12:11 embed-certs-399565 kubelet[722]: I1212 20:12:11.153622     722 scope.go:117] "RemoveContainer" containerID="485caaea81b92fab60cd86ab36159709aef0f8e99eea6b0bea5849fd11e07b1e"
	Dec 12 20:12:12 embed-certs-399565 kubelet[722]: I1212 20:12:12.158332     722 scope.go:117] "RemoveContainer" containerID="485caaea81b92fab60cd86ab36159709aef0f8e99eea6b0bea5849fd11e07b1e"
	Dec 12 20:12:12 embed-certs-399565 kubelet[722]: I1212 20:12:12.158452     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:12 embed-certs-399565 kubelet[722]: E1212 20:12:12.158641     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:13 embed-certs-399565 kubelet[722]: I1212 20:12:13.163222     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:13 embed-certs-399565 kubelet[722]: E1212 20:12:13.163441     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:16 embed-certs-399565 kubelet[722]: I1212 20:12:16.212800     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hwvvn" podStartSLOduration=1.572122381 podStartE2EDuration="8.212775303s" podCreationTimestamp="2025-12-12 20:12:08 +0000 UTC" firstStartedPulling="2025-12-12 20:12:08.777701914 +0000 UTC m=+6.777664319" lastFinishedPulling="2025-12-12 20:12:15.418354831 +0000 UTC m=+13.418317241" observedRunningTime="2025-12-12 20:12:16.212651473 +0000 UTC m=+14.212613885" watchObservedRunningTime="2025-12-12 20:12:16.212775303 +0000 UTC m=+14.212737743"
	Dec 12 20:12:18 embed-certs-399565 kubelet[722]: I1212 20:12:18.443224     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:18 embed-certs-399565 kubelet[722]: E1212 20:12:18.443472     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: I1212 20:12:29.099791     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: I1212 20:12:29.204593     722 scope.go:117] "RemoveContainer" containerID="3e03d3a5fc0cd47838c3636121206d85d238ade4100760dd8db8e3d4a6026715"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: I1212 20:12:29.204823     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:29 embed-certs-399565 kubelet[722]: E1212 20:12:29.205017     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:36 embed-certs-399565 kubelet[722]: I1212 20:12:36.224317     722 scope.go:117] "RemoveContainer" containerID="db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa"
	Dec 12 20:12:38 embed-certs-399565 kubelet[722]: I1212 20:12:38.443834     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:38 embed-certs-399565 kubelet[722]: E1212 20:12:38.444057     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: I1212 20:12:51.099501     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: I1212 20:12:51.271267     722 scope.go:117] "RemoveContainer" containerID="1c049a6cbb92a22cb990bedef4160dcca97ade070c72be866762422db46af0ee"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: I1212 20:12:51.271555     722 scope.go:117] "RemoveContainer" containerID="a02bc8eea52d1f29eb5e3c408bb339f74803db1963fb930969f4a47fe8df68b7"
	Dec 12 20:12:51 embed-certs-399565 kubelet[722]: E1212 20:12:51.272078     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zdjv_kubernetes-dashboard(3a076791-6f20-4953-8a2f-940ff88fc5af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zdjv" podUID="3a076791-6f20-4953-8a2f-940ff88fc5af"
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:12:52 embed-certs-399565 systemd[1]: kubelet.service: Consumed 1.644s CPU time.
	
	
	==> kubernetes-dashboard [cc941461156ed0cf714b01a2f22277d720bedfc937d891d687f0e2e22e6b697a] <==
	2025/12/12 20:12:15 Using namespace: kubernetes-dashboard
	2025/12/12 20:12:15 Using in-cluster config to connect to apiserver
	2025/12/12 20:12:15 Using secret token for csrf signing
	2025/12/12 20:12:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/12 20:12:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/12 20:12:15 Successful initial request to the apiserver, version: v1.34.2
	2025/12/12 20:12:15 Generating JWE encryption key
	2025/12/12 20:12:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/12 20:12:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/12 20:12:15 Initializing JWE encryption key from synchronized object
	2025/12/12 20:12:15 Creating in-cluster Sidecar client
	2025/12/12 20:12:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:12:15 Serving insecurely on HTTP port: 9090
	2025/12/12 20:12:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/12 20:12:15 Starting overwatch
	
	
	==> storage-provisioner [db7cce6e798bcb16ec89d7b8cb54237dc498e9c99560888d6461c7b2f3a028aa] <==
	I1212 20:12:05.454248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:12:35.456479       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e860f29f72e6009ea5335c8f0c2d97f95e18316d247d5e91ebfd2df64073c48c] <==
	I1212 20:12:36.281719       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:12:36.290611       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:12:36.290659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1212 20:12:36.293348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:39.748549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:44.009381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:47.607781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:50.661646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.683931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.688365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:53.688561       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:12:53.688620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ba9b917-4c14-4eae-ad77-6eb4b88284eb", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-399565_a8811eac-b8b5-4196-a0ba-b80c5eebdfc8 became leader
	I1212 20:12:53.688667       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-399565_a8811eac-b8b5-4196-a0ba-b80c5eebdfc8!
	W1212 20:12:53.690523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.693419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1212 20:12:53.788790       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-399565_a8811eac-b8b5-4196-a0ba-b80c5eebdfc8!
	W1212 20:12:55.696530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:55.718717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:57.722578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:57.727169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-399565 -n embed-certs-399565
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-399565 -n embed-certs-399565: exit status 2 (410.111359ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-399565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.23s)

                                                
                                    

Test pass (353/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.31
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 3.01
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.99
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
29 TestDownloadOnlyKic 0.38
30 TestBinaryMirror 0.79
31 TestOffline 64.18
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 152.74
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 8.4
57 TestAddons/StoppedEnableDisable 16.61
58 TestCertOptions 26.77
59 TestCertExpiration 214.75
61 TestForceSystemdFlag 26.94
62 TestForceSystemdEnv 27
67 TestErrorSpam/setup 18.4
68 TestErrorSpam/start 0.61
69 TestErrorSpam/status 0.9
70 TestErrorSpam/pause 6.22
71 TestErrorSpam/unpause 4.98
72 TestErrorSpam/stop 8.06
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 69.55
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.04
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.67
84 TestFunctional/serial/CacheCmd/cache/add_local 1.22
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.45
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 66
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.11
95 TestFunctional/serial/LogsFileCmd 1.12
96 TestFunctional/serial/InvalidService 4.19
98 TestFunctional/parallel/ConfigCmd 0.46
99 TestFunctional/parallel/DashboardCmd 7.1
100 TestFunctional/parallel/DryRun 0.39
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 1.08
106 TestFunctional/parallel/ServiceCmdConnect 8.7
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 18.5
110 TestFunctional/parallel/SSHCmd 0.55
111 TestFunctional/parallel/CpCmd 1.81
112 TestFunctional/parallel/MySQL 23.85
113 TestFunctional/parallel/FileSync 0.28
114 TestFunctional/parallel/CertSync 1.65
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
122 TestFunctional/parallel/License 0.4
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.19
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
125 TestFunctional/parallel/ProfileCmd/profile_list 0.49
126 TestFunctional/parallel/MountCmd/any-port 5.65
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.19
133 TestFunctional/parallel/MountCmd/specific-port 1.91
134 TestFunctional/parallel/ServiceCmd/List 0.32
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
138 TestFunctional/parallel/ServiceCmd/Format 0.35
139 TestFunctional/parallel/ServiceCmd/URL 0.35
140 TestFunctional/parallel/Version/short 0.08
141 TestFunctional/parallel/Version/components 0.54
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.65
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
146 TestFunctional/parallel/ImageCommands/ImageBuild 4.26
147 TestFunctional/parallel/ImageCommands/Setup 0.97
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.24
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
164 TestFunctional/delete_echo-server_images 0.03
165 TestFunctional/delete_my-image_image 0.01
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 39.13
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 5.93
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.93
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.17
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.27
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.48
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 45.95
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.13
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.13
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.05
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 8.64
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.4
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.98
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 8.5
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 19.5
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.63
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.82
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 19.98
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.71
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.58
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.51
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 9.24
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 10.16
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.38
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 5.6
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.53
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.53
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.35
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.38
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.17
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.33
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.31
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 6.96
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.39
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.19
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.02
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 2.08
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.23
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.56
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.39
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.55
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.7
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.49
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
265 TestMultiControlPlane/serial/StartCluster 165.2
266 TestMultiControlPlane/serial/DeployApp 3.86
267 TestMultiControlPlane/serial/PingHostFromPods 1
268 TestMultiControlPlane/serial/AddWorkerNode 53.86
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
271 TestMultiControlPlane/serial/CopyFile 16.24
272 TestMultiControlPlane/serial/StopSecondaryNode 19
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.49
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 190.94
277 TestMultiControlPlane/serial/DeleteSecondaryNode 32.02
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
279 TestMultiControlPlane/serial/StopCluster 36.01
280 TestMultiControlPlane/serial/RestartCluster 56.51
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
282 TestMultiControlPlane/serial/AddSecondaryNode 53.41
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
288 TestJSONOutput/start/Command 39.05
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.13
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.22
313 TestKicCustomNetwork/create_custom_network 28.32
314 TestKicCustomNetwork/use_default_bridge_network 24.63
315 TestKicExistingNetwork 24.78
316 TestKicCustomSubnet 24.25
317 TestKicStaticIP 22.01
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 46.33
322 TestMountStart/serial/StartWithMountFirst 4.69
323 TestMountStart/serial/VerifyMountFirst 0.26
324 TestMountStart/serial/StartWithMountSecond 7.56
325 TestMountStart/serial/VerifyMountSecond 0.26
326 TestMountStart/serial/DeleteFirst 1.65
327 TestMountStart/serial/VerifyMountPostDelete 0.26
328 TestMountStart/serial/Stop 1.24
329 TestMountStart/serial/RestartStopped 7.22
330 TestMountStart/serial/VerifyMountPostStop 0.25
333 TestMultiNode/serial/FreshStart2Nodes 64.57
334 TestMultiNode/serial/DeployApp2Nodes 2.79
335 TestMultiNode/serial/PingHostFrom2Pods 0.67
336 TestMultiNode/serial/AddNode 24.7
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.62
339 TestMultiNode/serial/CopyFile 9.32
340 TestMultiNode/serial/StopNode 2.19
341 TestMultiNode/serial/StartAfterStop 7.02
342 TestMultiNode/serial/RestartKeepsNodes 56.58
343 TestMultiNode/serial/DeleteNode 4.88
344 TestMultiNode/serial/StopMultiNode 28.44
345 TestMultiNode/serial/RestartMultiNode 25.29
346 TestMultiNode/serial/ValidateNameConflict 21.73
351 TestPreload 82.23
353 TestScheduledStopUnix 98.06
356 TestInsufficientStorage 8.58
357 TestRunningBinaryUpgrade 315.15
359 TestKubernetesUpgrade 304.88
360 TestMissingContainerUpgrade 97.33
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
363 TestNoKubernetes/serial/StartWithK8s 42.48
364 TestNoKubernetes/serial/StartWithStopK8s 23.33
372 TestNoKubernetes/serial/Start 5.76
373 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
374 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
375 TestNoKubernetes/serial/ProfileList 31.02
383 TestNetworkPlugins/group/false 3.46
387 TestNoKubernetes/serial/Stop 1.28
389 TestPause/serial/Start 41.73
390 TestNoKubernetes/serial/StartNoArgs 6.37
391 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
392 TestPause/serial/SecondStartNoReconfiguration 6.17
394 TestStoppedBinaryUpgrade/Setup 0.74
395 TestStoppedBinaryUpgrade/Upgrade 282.79
397 TestStartStop/group/old-k8s-version/serial/FirstStart 50.11
399 TestStartStop/group/no-preload/serial/FirstStart 45.18
400 TestStartStop/group/old-k8s-version/serial/DeployApp 7.29
401 TestStartStop/group/no-preload/serial/DeployApp 8.25
403 TestStartStop/group/old-k8s-version/serial/Stop 15.93
405 TestStartStop/group/no-preload/serial/Stop 18.09
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
407 TestStartStop/group/old-k8s-version/serial/SecondStart 45.14
408 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
409 TestStartStop/group/no-preload/serial/SecondStart 48.59
410 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
412 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
414 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.67
415 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
417 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
418 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
420 TestStartStop/group/newest-cni/serial/FirstStart 27.98
422 TestStartStop/group/embed-certs/serial/FirstStart 45.53
423 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
425 TestNetworkPlugins/group/auto/Start 43.07
426 TestStartStop/group/newest-cni/serial/DeployApp 0
428 TestStartStop/group/newest-cni/serial/Stop 2.51
429 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
430 TestStartStop/group/newest-cni/serial/SecondStart 10.31
431 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
433 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
434 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
435 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
437 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.57
438 TestStartStop/group/embed-certs/serial/DeployApp 8.3
439 TestNetworkPlugins/group/kindnet/Start 42.4
441 TestStartStop/group/embed-certs/serial/Stop 16.36
442 TestNetworkPlugins/group/auto/KubeletFlags 0.27
443 TestNetworkPlugins/group/auto/NetCatPod 9.18
444 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
445 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.01
446 TestNetworkPlugins/group/auto/DNS 0.14
447 TestNetworkPlugins/group/auto/Localhost 0.1
448 TestNetworkPlugins/group/auto/HairPin 0.1
449 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
450 TestStartStop/group/embed-certs/serial/SecondStart 46.13
451 TestNetworkPlugins/group/calico/Start 52.22
452 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
453 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
454 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
455 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
456 TestNetworkPlugins/group/kindnet/DNS 0.11
457 TestNetworkPlugins/group/kindnet/Localhost 0.09
458 TestNetworkPlugins/group/kindnet/HairPin 0.09
459 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
460 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
462 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
463 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
464 TestNetworkPlugins/group/custom-flannel/Start 52.78
465 TestNetworkPlugins/group/enable-default-cni/Start 68.14
466 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.47
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/bridge/Start 40.48
470 TestNetworkPlugins/group/calico/KubeletFlags 0.29
471 TestNetworkPlugins/group/calico/NetCatPod 10.27
472 TestNetworkPlugins/group/calico/DNS 0.15
473 TestNetworkPlugins/group/calico/Localhost 0.13
474 TestNetworkPlugins/group/calico/HairPin 0.12
475 TestNetworkPlugins/group/flannel/Start 47.53
476 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
477 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.73
478 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
479 TestNetworkPlugins/group/bridge/NetCatPod 9.23
480 TestNetworkPlugins/group/custom-flannel/DNS 0.12
481 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
482 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
483 TestNetworkPlugins/group/bridge/DNS 0.11
484 TestNetworkPlugins/group/bridge/Localhost 0.1
485 TestNetworkPlugins/group/bridge/HairPin 0.09
486 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
487 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
488 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
489 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
490 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
491 TestNetworkPlugins/group/flannel/ControllerPod 6.01
492 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
493 TestNetworkPlugins/group/flannel/NetCatPod 9.16
494 TestNetworkPlugins/group/flannel/DNS 0.11
495 TestNetworkPlugins/group/flannel/Localhost 0.08
496 TestNetworkPlugins/group/flannel/HairPin 0.08
x
+
TestDownloadOnly/v1.28.0/json-events (4.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-122070 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-122070 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.308501874s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1212 19:28:29.062038    9254 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1212 19:28:29.062125    9254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-122070
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-122070: exit status 85 (69.220998ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-122070 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-122070 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:28:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:28:24.803025    9266 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:28:24.803263    9266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:24.803284    9266 out.go:374] Setting ErrFile to fd 2...
	I1212 19:28:24.803291    9266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:24.803459    9266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	W1212 19:28:24.803589    9266 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22112-5703/.minikube/config/config.json: open /home/jenkins/minikube-integration/22112-5703/.minikube/config/config.json: no such file or directory
	I1212 19:28:24.804078    9266 out.go:368] Setting JSON to true
	I1212 19:28:24.804891    9266 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":652,"bootTime":1765567053,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:28:24.804956    9266 start.go:143] virtualization: kvm guest
	I1212 19:28:24.809705    9266 out.go:99] [download-only-122070] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1212 19:28:24.809828    9266 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 19:28:24.809886    9266 notify.go:221] Checking for updates...
	I1212 19:28:24.811207    9266 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:28:24.812388    9266 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:28:24.813561    9266 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:28:24.814731    9266 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:28:24.815870    9266 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:28:24.817872    9266 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:28:24.818074    9266 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:28:24.840439    9266 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:28:24.840537    9266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:25.053049    9266 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-12 19:28:25.044259292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:25.053163    9266 docker.go:319] overlay module found
	I1212 19:28:25.054833    9266 out.go:99] Using the docker driver based on user configuration
	I1212 19:28:25.054860    9266 start.go:309] selected driver: docker
	I1212 19:28:25.054868    9266 start.go:927] validating driver "docker" against <nil>
	I1212 19:28:25.054952    9266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:25.109984    9266 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-12 19:28:25.100499442 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:25.110236    9266 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:28:25.110953    9266 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1212 19:28:25.111162    9266 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:28:25.112791    9266 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-122070 host does not exist
	  To start a cluster, run: "minikube start -p download-only-122070"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-122070
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (3.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-990185 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-990185 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.0103397s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.01s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1212 19:28:32.483675    9254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1212 19:28:32.483715    9254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-990185
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-990185: exit status 85 (68.581238ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-122070 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-122070 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-122070                                                                                                                                                   │ download-only-122070 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ start   │ -o=json --download-only -p download-only-990185 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-990185 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:28:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:28:29.524263    9631 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:28:29.524510    9631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:29.524521    9631 out.go:374] Setting ErrFile to fd 2...
	I1212 19:28:29.524527    9631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:29.524722    9631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:28:29.525180    9631 out.go:368] Setting JSON to true
	I1212 19:28:29.526028    9631 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":656,"bootTime":1765567053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:28:29.526075    9631 start.go:143] virtualization: kvm guest
	I1212 19:28:29.527894    9631 out.go:99] [download-only-990185] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:28:29.528043    9631 notify.go:221] Checking for updates...
	I1212 19:28:29.529205    9631 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:28:29.530482    9631 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:28:29.531704    9631 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:28:29.532816    9631 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:28:29.533985    9631 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:28:29.536242    9631 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:28:29.536469    9631 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:28:29.558498    9631 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:28:29.558621    9631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:29.610973    9631 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-12 19:28:29.602343602 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:29.611071    9631 docker.go:319] overlay module found
	I1212 19:28:29.612412    9631 out.go:99] Using the docker driver based on user configuration
	I1212 19:28:29.612437    9631 start.go:309] selected driver: docker
	I1212 19:28:29.612446    9631 start.go:927] validating driver "docker" against <nil>
	I1212 19:28:29.612520    9631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:29.667217    9631 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-12 19:28:29.658979809 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:29.667425    9631 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:28:29.667860    9631 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1212 19:28:29.667982    9631 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:28:29.669439    9631 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-990185 host does not exist
	  To start a cluster, run: "minikube start -p download-only-990185"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-990185
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (2.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-573235 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-573235 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.986488069s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.99s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1212 19:28:35.879425    9254 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1212 19:28:35.879460    9254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
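The preload check above only verifies that the lz4 tarball already sits in the local minikube cache. A minimal way to confirm the same thing by hand, with the path copied from the log line above (MINIKUBE_HOME is assumed to point at the .minikube directory this run used):

    # Assumes MINIKUBE_HOME is the .minikube directory from the run above.
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4"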

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-573235
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-573235: exit status 85 (69.113501ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-122070 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-122070 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-122070                                                                                                                                                          │ download-only-122070 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ start   │ -o=json --download-only -p download-only-990185 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-990185 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ delete  │ -p download-only-990185                                                                                                                                                          │ download-only-990185 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │ 12 Dec 25 19:28 UTC │
	│ start   │ -o=json --download-only -p download-only-573235 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-573235 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:28:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:28:32.941517    9975 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:28:32.941707    9975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:32.941715    9975 out.go:374] Setting ErrFile to fd 2...
	I1212 19:28:32.941719    9975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:32.941941    9975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:28:32.942372    9975 out.go:368] Setting JSON to true
	I1212 19:28:32.943112    9975 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":660,"bootTime":1765567053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:28:32.943161    9975 start.go:143] virtualization: kvm guest
	I1212 19:28:32.944827    9975 out.go:99] [download-only-573235] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:28:32.944946    9975 notify.go:221] Checking for updates...
	I1212 19:28:32.946178    9975 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:28:32.947400    9975 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:28:32.948528    9975 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:28:32.949809    9975 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:28:32.951170    9975 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:28:32.953307    9975 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:28:32.953511    9975 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:28:32.974683    9975 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:28:32.974787    9975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:33.026848    9975 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-12 19:28:33.017755238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:33.026939    9975 docker.go:319] overlay module found
	I1212 19:28:33.028209    9975 out.go:99] Using the docker driver based on user configuration
	I1212 19:28:33.028233    9975 start.go:309] selected driver: docker
	I1212 19:28:33.028238    9975 start.go:927] validating driver "docker" against <nil>
	I1212 19:28:33.028321    9975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:33.077505    9975 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-12 19:28:33.068908661 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:28:33.077639    9975 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:28:33.078085    9975 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1212 19:28:33.078216    9975 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:28:33.079599    9975 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-573235 host does not exist
	  To start a cluster, run: "minikube start -p download-only-573235"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-573235
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-465015 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-465015" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-465015
--- PASS: TestDownloadOnlyKic (0.38s)

                                                
                                    
x
+
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I1212 19:28:37.064177    9254 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-608278 --alsologtostderr --binary-mirror http://127.0.0.1:36999 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-608278" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-608278
--- PASS: TestBinaryMirror (0.79s)
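TestBinaryMirror points a download-only start at a local HTTP mirror for the Kubernetes binaries instead of the default download location. A rough sketch of the same flow outside the harness, assuming a mirror is already serving on the port shown in this run (the profile name below is illustrative, not from the log):

    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:36999 \
      --driver=docker --container-runtime=crio
    minikube delete -p binary-mirror-demo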

                                                
                                    
x
+
TestOffline (64.18s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-540268 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-540268 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m1.788524896s)
helpers_test.go:176: Cleaning up "offline-crio-540268" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-540268
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-540268: (2.38941268s)
--- PASS: TestOffline (64.18s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-410014
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-410014: exit status 85 (64.048796ms)

                                                
                                                
-- stdout --
	* Profile "addons-410014" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-410014"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-410014
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-410014: exit status 85 (64.785134ms)

                                                
                                                
-- stdout --
	* Profile "addons-410014" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-410014"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (152.74s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-410014 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-410014 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m32.73649374s)
--- PASS: TestAddons/Setup (152.74s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-410014 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-410014 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.4s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-410014 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-410014 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [616944ae-2125-4437-bf51-6aa3067feb79] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [616944ae-2125-4437-bf51-6aa3067feb79] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003204554s
addons_test.go:696: (dbg) Run:  kubectl --context addons-410014 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-410014 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-410014 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.40s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (16.61s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-410014
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-410014: (16.336144342s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-410014
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-410014
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-410014
--- PASS: TestAddons/StoppedEnableDisable (16.61s)
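The sequence above exercises addon toggling against a stopped cluster; reproduced here as plain CLI calls, all taken from the invocations in this test:

    minikube stop -p addons-410014
    minikube addons enable dashboard -p addons-410014
    minikube addons disable dashboard -p addons-410014
    minikube addons disable gvisor -p addons-410014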

                                                
                                    
x
+
TestCertOptions (26.77s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-427408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-427408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.644292018s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-427408 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-427408 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-427408 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-427408" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-427408
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-427408: (2.392994024s)
--- PASS: TestCertOptions (26.77s)
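The assertions here boil down to inspecting the generated apiserver certificate and kubeconfig for the extra SANs and the non-default port. The same checks can be run by hand (commands taken from the test above; the SAN and port values to look for are the ones passed on the start line):

    minikube -p cert-options-427408 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    minikube ssh -p cert-options-427408 -- "sudo cat /etc/kubernetes/admin.conf"
    kubectl --context cert-options-427408 config view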

                                                
                                    
x
+
TestCertExpiration (214.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-070436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1212 20:05:00.416843    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-070436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.697606664s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.639190622s)
helpers_test.go:176: Cleaning up "cert-expiration-070436" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-070436
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-070436: (2.410818249s)
--- PASS: TestCertExpiration (214.75s)
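The test creates a cluster with deliberately short-lived certificates and then restarts it with a much longer expiration, which is also how a user would refresh certs that are about to lapse. Both invocations are copied from the run above:

    minikube start -p cert-expiration-070436 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
    # roughly 3 minutes later, restart with a one-year expiration:
    minikube start -p cert-expiration-070436 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio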

                                                
                                    
x
+
TestForceSystemdFlag (26.94s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-012185 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-012185 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.282255745s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-012185 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-012185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-012185
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-012185: (2.387666546s)
--- PASS: TestForceSystemdFlag (26.94s)
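With --force-systemd the test only cats CRI-O's drop-in config to confirm which cgroup manager it ended up with. Checking it by hand looks like this; the cgroup_manager = "systemd" key name is assumed from CRI-O's configuration schema, not taken from this log:

    minikube start -p force-systemd-flag-012185 --memory=3072 --force-systemd --driver=docker --container-runtime=crio
    minikube -p force-systemd-flag-012185 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
    # expect a line like: cgroup_manager = "systemd"   (key name assumed, see above)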

                                                
                                    
x
+
TestForceSystemdEnv (27s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-361023 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-361023 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.489214222s)
helpers_test.go:176: Cleaning up "force-systemd-env-361023" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-361023
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-361023: (2.503829944s)
--- PASS: TestForceSystemdEnv (27.00s)

                                                
                                    
x
+
TestErrorSpam/setup (18.4s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-756806 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-756806 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-756806 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-756806 --driver=docker  --container-runtime=crio: (18.402939027s)
--- PASS: TestErrorSpam/setup (18.40s)

                                                
                                    
x
+
TestErrorSpam/start (0.61s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

                                                
                                    
x
+
TestErrorSpam/status (0.9s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 status
--- PASS: TestErrorSpam/status (0.90s)

                                                
                                    
x
+
TestErrorSpam/pause (6.22s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause: exit status 80 (1.998923121s)

                                                
                                                
-- stdout --
	* Pausing node nospam-756806 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:34:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause: exit status 80 (2.147247306s)

                                                
                                                
-- stdout --
	* Pausing node nospam-756806 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:34:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause: exit status 80 (2.0712963s)

                                                
                                                
-- stdout --
	* Pausing node nospam-756806 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:34:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.22s)
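Each pause attempt above fails the same way: the pause path shells into the node and runs runc list, which cannot open /run/runc. The failing command can be re-run directly to confirm, using exactly the invocation quoted in the error text:

    minikube -p nospam-756806 ssh "sudo runc list -f json"
    # reproduces: open /run/runc: no such file or directory
    # (i.e. the runc state directory the command tries to read is absent on this CRI-O node)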

                                                
                                    
x
+
TestErrorSpam/unpause (4.98s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause: exit status 80 (1.551612818s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-756806 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:34:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause: exit status 80 (1.890776859s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-756806 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:34:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause: exit status 80 (1.539623521s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-756806 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T19:34:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.98s)

                                                
                                    
x
+
TestErrorSpam/stop (8.06s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 stop: (7.870115166s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756806 --log_dir /tmp/nospam-756806 stop
--- PASS: TestErrorSpam/stop (8.06s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/test/nested/copy/9254/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (69.55s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828160 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-828160 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.546238961s)
--- PASS: TestFunctional/serial/StartWithProxy (69.55s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.04s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1212 19:36:07.147050    9254 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828160 --alsologtostderr -v=8
E1212 19:36:11.445995    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:11.452339    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:11.463670    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:11.484989    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:11.526840    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:11.608939    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:11.770719    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:12.092768    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:12.735010    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-828160 --alsologtostderr -v=8: (6.038742604s)
functional_test.go:678: soft start took 6.039433447s for "functional-828160" cluster.
I1212 19:36:13.186140    9254 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.04s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-828160 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cache add registry.k8s.io/pause:3.1
E1212 19:36:14.016422    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.67s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-828160 /tmp/TestFunctionalserialCacheCmdcacheadd_local2998830906/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cache add minikube-local-cache-test:functional-828160
E1212 19:36:16.578412    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cache delete minikube-local-cache-test:functional-828160
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-828160
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.067014ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)

x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 kubectl -- --context functional-828160 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-828160 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

x
+
TestFunctional/serial/ExtraConfig (66s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828160 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 19:36:21.699728    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:31.942024    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:36:52.423424    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-828160 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.000018084s)
functional_test.go:776: restart took 1m6.00015238s for "functional-828160" cluster.
I1212 19:37:25.393775    9254 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (66.00s)

x
+
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-828160 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

x
+
TestFunctional/serial/LogsCmd (1.11s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-828160 logs: (1.110561566s)
--- PASS: TestFunctional/serial/LogsCmd (1.11s)

x
+
TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 logs --file /tmp/TestFunctionalserialLogsFileCmd1361034918/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-828160 logs --file /tmp/TestFunctionalserialLogsFileCmd1361034918/001/logs.txt: (1.122940126s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

x
+
TestFunctional/serial/InvalidService (4.19s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-828160 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-828160
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-828160: exit status 115 (325.014929ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32635 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-828160 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)

x
+
TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 config get cpus: exit status 14 (96.335328ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 config get cpus: exit status 14 (72.549492ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

x
+
TestFunctional/parallel/DashboardCmd (7.1s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-828160 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-828160 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 48435: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.10s)

x
+
TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828160 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-828160 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.572941ms)

-- stdout --
	* [functional-828160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1212 19:37:45.787513   47536 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:37:45.787611   47536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:37:45.787619   47536 out.go:374] Setting ErrFile to fd 2...
	I1212 19:37:45.787623   47536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:37:45.787806   47536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:37:45.788189   47536 out.go:368] Setting JSON to false
	I1212 19:37:45.789084   47536 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1213,"bootTime":1765567053,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:37:45.789148   47536 start.go:143] virtualization: kvm guest
	I1212 19:37:45.791225   47536 out.go:179] * [functional-828160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:37:45.792298   47536 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:37:45.792298   47536 notify.go:221] Checking for updates...
	I1212 19:37:45.793648   47536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:37:45.794780   47536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:37:45.795805   47536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:37:45.796787   47536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:37:45.801429   47536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:37:45.802862   47536 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:37:45.803340   47536 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:37:45.828787   47536 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:37:45.828869   47536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:37:45.887682   47536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-12 19:37:45.878102286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:37:45.887777   47536 docker.go:319] overlay module found
	I1212 19:37:45.889199   47536 out.go:179] * Using the docker driver based on existing profile
	I1212 19:37:45.890293   47536 start.go:309] selected driver: docker
	I1212 19:37:45.890306   47536 start.go:927] validating driver "docker" against &{Name:functional-828160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-828160 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:37:45.890403   47536 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:37:45.891864   47536 out.go:203] 
	W1212 19:37:45.892918   47536 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 19:37:45.894221   47536 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828160 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)

x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828160 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-828160 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.002181ms)

-- stdout --
	* [functional-828160] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1212 19:37:46.181416   47790 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:37:46.181512   47790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:37:46.181520   47790 out.go:374] Setting ErrFile to fd 2...
	I1212 19:37:46.181524   47790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:37:46.181804   47790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:37:46.182197   47790 out.go:368] Setting JSON to false
	I1212 19:37:46.183060   47790 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1213,"bootTime":1765567053,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:37:46.183117   47790 start.go:143] virtualization: kvm guest
	I1212 19:37:46.184742   47790 out.go:179] * [functional-828160] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 19:37:46.185897   47790 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:37:46.185899   47790 notify.go:221] Checking for updates...
	I1212 19:37:46.188737   47790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:37:46.189918   47790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:37:46.191019   47790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:37:46.192002   47790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:37:46.192997   47790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:37:46.194544   47790 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:37:46.195052   47790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:37:46.218637   47790 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:37:46.218712   47790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:37:46.279115   47790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-12 19:37:46.268368231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:37:46.279216   47790 docker.go:319] overlay module found
	I1212 19:37:46.281527   47790 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 19:37:46.282634   47790 start.go:309] selected driver: docker
	I1212 19:37:46.282647   47790 start.go:927] validating driver "docker" against &{Name:functional-828160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-828160 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:37:46.282735   47790 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:37:46.284434   47790 out.go:203] 
	W1212 19:37:46.285543   47790 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 19:37:46.286564   47790 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

x
+
TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

x
+
TestFunctional/parallel/ServiceCmdConnect (8.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-828160 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-828160 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-7fxcw" [18d56d28-43d4-4624-b95a-087349996e02] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-7fxcw" [18d56d28-43d4-4624-b95a-087349996e02] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.002801775s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31086
functional_test.go:1680: http://192.168.49.2:31086: success! body:
Request served by hello-node-connect-7d85dfc575-7fxcw

HTTP/1.1 GET /

Host: 192.168.49.2:31086
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.70s)

x
+
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (18.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [1da56391-8979-4fbd-9f27-d055b4e698a2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003827357s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-828160 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-828160 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-828160 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-828160 apply -f testdata/storage-provisioner/pod.yaml
I1212 19:37:39.049422    9254 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7537e1ad-8f47-4fae-b752-91c38667f01a] Pending
helpers_test.go:353: "sp-pod" [7537e1ad-8f47-4fae-b752-91c38667f01a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004263976s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-828160 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-828160 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-828160 apply -f testdata/storage-provisioner/pod.yaml
I1212 19:37:46.063017    9254 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [25f1be46-044f-4eed-b838-2e0682ac0397] Pending
helpers_test.go:353: "sp-pod" [25f1be46-044f-4eed-b838-2e0682ac0397] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004022617s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-828160 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (18.50s)

x
+
TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

x
+
TestFunctional/parallel/CpCmd (1.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh -n functional-828160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cp functional-828160:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd96724273/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh -n functional-828160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh -n functional-828160 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)

x
+
TestFunctional/parallel/MySQL (23.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-828160 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-svg4r" [3937ffcb-fca5-4b85-880d-2be3b930b13e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-svg4r" [3937ffcb-fca5-4b85-880d-2be3b930b13e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.017717255s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-828160 exec mysql-6bcdcbc558-svg4r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-828160 exec mysql-6bcdcbc558-svg4r -- mysql -ppassword -e "show databases;": exit status 1 (106.579064ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1212 19:38:06.503803    9254 retry.go:31] will retry after 1.12433129s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-828160 exec mysql-6bcdcbc558-svg4r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-828160 exec mysql-6bcdcbc558-svg4r -- mysql -ppassword -e "show databases;": exit status 1 (87.332297ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1212 19:38:07.716444    9254 retry.go:31] will retry after 2.17010271s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-828160 exec mysql-6bcdcbc558-svg4r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-828160 exec mysql-6bcdcbc558-svg4r -- mysql -ppassword -e "show databases;": exit status 1 (83.048568ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1212 19:38:09.971957    9254 retry.go:31] will retry after 3.01898852s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-828160 exec mysql-6bcdcbc558-svg4r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.85s)

x
+
TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9254/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo cat /etc/test/nested/copy/9254/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

x
+
TestFunctional/parallel/CertSync (1.65s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9254.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo cat /etc/ssl/certs/9254.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9254.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo cat /usr/share/ca-certificates/9254.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/92542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo cat /etc/ssl/certs/92542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/92542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo cat /usr/share/ca-certificates/92542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)

x
+
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-828160 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh "sudo systemctl is-active docker": exit status 1 (256.463251ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh "sudo systemctl is-active containerd": exit status 1 (263.308639ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

x
+
TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-828160 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-828160 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-9bfgv" [de65c289-61a7-46d4-94f2-cc576352d585] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-9bfgv" [de65c289-61a7-46d4-94f2-cc576352d585] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.002637856s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "419.500821ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.716831ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

x
+
TestFunctional/parallel/MountCmd/any-port (5.65s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdany-port2407026076/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765568252965162618" to /tmp/TestFunctionalparallelMountCmdany-port2407026076/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765568252965162618" to /tmp/TestFunctionalparallelMountCmdany-port2407026076/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765568252965162618" to /tmp/TestFunctionalparallelMountCmdany-port2407026076/001/test-1765568252965162618
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.322202ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 19:37:33.293931    9254 retry.go:31] will retry after 385.924274ms: exit status 1
E1212 19:37:33.385717    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 19:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 19:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 19:37 test-1765568252965162618
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh cat /mount-9p/test-1765568252965162618
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-828160 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [a5d88719-1b70-4c15-8103-01f6f7cc95dc] Pending
helpers_test.go:353: "busybox-mount" [a5d88719-1b70-4c15-8103-01f6f7cc95dc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [a5d88719-1b70-4c15-8103-01f6f7cc95dc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [a5d88719-1b70-4c15-8103-01f6f7cc95dc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003191675s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-828160 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdany-port2407026076/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.65s)
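For reference, the 9p mount exercised here can be reproduced by hand with the same binary and profile. This is a minimal sketch assembled from the commands logged above; the host directory is a placeholder of your choosing, and the final --kill invocation (taken from the VerifyCleanup test further down) tears the mount daemon back down:

    out/minikube-linux-amd64 mount -p functional-828160 <host-dir>:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-828160 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-828160 ssh "sudo umount -f /mount-9p"
    out/minikube-linux-amd64 mount -p functional-828160 --kill=true

The test itself additionally applies testdata/busybox-mount-test.yaml so that a pod writes through the mount (the created-by-pod file stat-checked at the end).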

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "340.027211ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.276319ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
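The JSON form is the easiest one to post-process. A hedged example, assuming the output keeps profiles under a top-level "valid" array whose entries expose a "Name" field (worth verifying against the minikube version in use):

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'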

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-828160 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-828160 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-828160 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-828160 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 43419: os: process already finished
helpers_test.go:520: unable to terminate pid 43122: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-828160 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-828160 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [6e6f3e77-0a71-4509-bdbb-48c6bfcebb25] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [6e6f3e77-0a71-4509-bdbb-48c6bfcebb25] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003534667s
I1212 19:37:43.845093    9254 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)
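The manifest used here, testdata/testsvc.yaml, is not reproduced in the report. As a rough equivalent of its shape (an nginx pod labelled run=nginx-svc plus a LoadBalancer Service named nginx-svc in the default namespace), the same setup could be created imperatively; the image tag and port 80 are assumptions, not taken from the manifest:

    kubectl --context functional-828160 run nginx-svc --image=nginx --port=80
    kubectl --context functional-828160 expose pod nginx-svc --name=nginx-svc --type=LoadBalancer --port=80
    kubectl --context functional-828160 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

With the tunnel from StartTunnel running, the last command should eventually print the assigned load-balancer IP (10.96.189.16 in this run).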

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdspecific-port316817695/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.036813ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:37:38.900932    9254 retry.go:31] will retry after 629.600919ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdspecific-port316817695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh "sudo umount -f /mount-9p": exit status 1 (266.688857ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-828160 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdspecific-port316817695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 service list -o json
functional_test.go:1504: Took "354.974353ms" to run "out/minikube-linux-amd64 -p functional-828160 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1519991113/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1519991113/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1519991113/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T" /mount1: exit status 1 (347.764091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:37:40.873082    9254 retry.go:31] will retry after 255.703345ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-828160 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1519991113/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1519991113/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1519991113/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30363
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30363
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
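Both endpoints resolve to the same NodePort on the cluster node; the IP and port shown are specific to this run. A hedged manual check, assuming the hello-node backend serves plain HTTP on that port:

    curl -s http://192.168.49.2:30363/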

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828160 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-828160
localhost/kicbase/echo-server:functional-828160
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828160 image ls --format short --alsologtostderr:
I1212 19:37:51.848976   49480 out.go:360] Setting OutFile to fd 1 ...
I1212 19:37:51.849292   49480 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:51.849310   49480 out.go:374] Setting ErrFile to fd 2...
I1212 19:37:51.849316   49480 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:51.849631   49480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:37:51.850391   49480 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:51.850536   49480 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:51.851125   49480 cli_runner.go:164] Run: docker container inspect functional-828160 --format={{.State.Status}}
I1212 19:37:51.873845   49480 ssh_runner.go:195] Run: systemctl --version
I1212 19:37:51.873898   49480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-828160
I1212 19:37:51.896180   49480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-828160/id_rsa Username:docker}
I1212 19:37:52.000598   49480 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828160 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-828160  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-828160  │ db5de78114ee9 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828160 image ls --format table --alsologtostderr:
I1212 19:37:52.721197   49788 out.go:360] Setting OutFile to fd 1 ...
I1212 19:37:52.721473   49788 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.721483   49788 out.go:374] Setting ErrFile to fd 2...
I1212 19:37:52.721487   49788 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.721673   49788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:37:52.722174   49788 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.722256   49788 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.722657   49788 cli_runner.go:164] Run: docker container inspect functional-828160 --format={{.State.Status}}
I1212 19:37:52.741852   49788 ssh_runner.go:195] Run: systemctl --version
I1212 19:37:52.741893   49788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-828160
I1212 19:37:52.761786   49788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-828160/id_rsa Username:docker}
I1212 19:37:52.856392   49788 ssh_runner.go:195] Run: sudo crictl images --output json
2025/12/12 19:37:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828160 image ls --format json --alsologtostderr:
[{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf6081
9cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"
size":"4631262"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568
ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"db5de78114ee9f8262989599de7342156cdde8c5f01d1b76b1ee760d389d9922","repoDigests":["localhost/minikube-local-cache-test@sha256:fb3768d6fabfe22de98ccddf9853d402081275abc3cb9ac8c9d5274340dd22b0"],"repoTags":["localh
ost/minikube-local-cache-test:functional-828160"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ec
ae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-828160"],"size":"4945146"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"0184c1613d92931126feb4c548e5da
11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828160 image ls --format json --alsologtostderr:
I1212 19:37:52.489988   49622 out.go:360] Setting OutFile to fd 1 ...
I1212 19:37:52.490228   49622 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.490237   49622 out.go:374] Setting ErrFile to fd 2...
I1212 19:37:52.490240   49622 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.490462   49622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:37:52.490946   49622 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.491036   49622 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.491482   49622 cli_runner.go:164] Run: docker container inspect functional-828160 --format={{.State.Status}}
I1212 19:37:52.508824   49622 ssh_runner.go:195] Run: systemctl --version
I1212 19:37:52.508871   49622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-828160
I1212 19:37:52.528761   49622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-828160/id_rsa Username:docker}
I1212 19:37:52.630132   49622 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828160 image ls --format yaml --alsologtostderr:
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-828160
size: "4945146"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: db5de78114ee9f8262989599de7342156cdde8c5f01d1b76b1ee760d389d9922
repoDigests:
- localhost/minikube-local-cache-test@sha256:fb3768d6fabfe22de98ccddf9853d402081275abc3cb9ac8c9d5274340dd22b0
repoTags:
- localhost/minikube-local-cache-test:functional-828160
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828160 image ls --format yaml --alsologtostderr:
I1212 19:37:52.237359   49549 out.go:360] Setting OutFile to fd 1 ...
I1212 19:37:52.237463   49549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.237469   49549 out.go:374] Setting ErrFile to fd 2...
I1212 19:37:52.237474   49549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.237718   49549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:37:52.238479   49549 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.238616   49549 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.239205   49549 cli_runner.go:164] Run: docker container inspect functional-828160 --format={{.State.Status}}
I1212 19:37:52.260892   49549 ssh_runner.go:195] Run: systemctl --version
I1212 19:37:52.260950   49549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-828160
I1212 19:37:52.283416   49549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-828160/id_rsa Username:docker}
I1212 19:37:52.383183   49549 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828160 ssh pgrep buildkitd: exit status 1 (276.470815ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image build -t localhost/my-image:functional-828160 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-828160 image build -t localhost/my-image:functional-828160 testdata/build --alsologtostderr: (2.644784107s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828160 image build -t localhost/my-image:functional-828160 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f181387ccd0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-828160
--> 97e159aa3b6
Successfully tagged localhost/my-image:functional-828160
97e159aa3b66a18ac3f185e99040fe28bc0292c72aa6f7bea05e703e27f051a9
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828160 image build -t localhost/my-image:functional-828160 testdata/build --alsologtostderr:
I1212 19:37:52.764828   49800 out.go:360] Setting OutFile to fd 1 ...
I1212 19:37:52.764991   49800 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.765005   49800 out.go:374] Setting ErrFile to fd 2...
I1212 19:37:52.765012   49800 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:37:52.765182   49800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:37:52.765725   49800 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.766252   49800 config.go:182] Loaded profile config "functional-828160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:37:52.766723   49800 cli_runner.go:164] Run: docker container inspect functional-828160 --format={{.State.Status}}
I1212 19:37:52.784626   49800 ssh_runner.go:195] Run: systemctl --version
I1212 19:37:52.784676   49800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-828160
I1212 19:37:52.803328   49800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-828160/id_rsa Username:docker}
I1212 19:37:52.898411   49800 build_images.go:162] Building image from path: /tmp/build.107625667.tar
I1212 19:37:52.898469   49800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 19:37:52.905991   49800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.107625667.tar
I1212 19:37:52.909490   49800 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.107625667.tar: stat -c "%s %y" /var/lib/minikube/build/build.107625667.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.107625667.tar': No such file or directory
I1212 19:37:52.909514   49800 ssh_runner.go:362] scp /tmp/build.107625667.tar --> /var/lib/minikube/build/build.107625667.tar (3072 bytes)
I1212 19:37:52.928072   49800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.107625667
I1212 19:37:52.935323   49800 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.107625667 -xf /var/lib/minikube/build/build.107625667.tar
I1212 19:37:52.943635   49800 crio.go:315] Building image: /var/lib/minikube/build/build.107625667
I1212 19:37:52.943679   49800 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-828160 /var/lib/minikube/build/build.107625667 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 19:37:55.322509   49800 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-828160 /var/lib/minikube/build/build.107625667 --cgroup-manager=cgroupfs: (2.378806706s)
I1212 19:37:55.322577   49800 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.107625667
I1212 19:37:55.332618   49800 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.107625667.tar
I1212 19:37:55.342208   49800 build_images.go:218] Built localhost/my-image:functional-828160 from /tmp/build.107625667.tar
I1212 19:37:55.342240   49800 build_images.go:134] succeeded building to: functional-828160
I1212 19:37:55.342247   49800 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-828160 image ls: (1.337619905s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.26s)
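The three build steps in the stdout above imply a build context along the lines of the sketch below; the scratch directory and the content.txt payload are assumptions, while the Dockerfile instructions and the image build invocation are taken from the log:

    mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
    printf 'hello\n' > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-amd64 -p functional-828160 image build -t localhost/my-image:functional-828160 . --alsologtostderr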

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-828160
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-828160 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.189.16 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-828160 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image load --daemon kicbase/echo-server:functional-828160 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image load --daemon kicbase/echo-server:functional-828160 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-828160
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image load --daemon kicbase/echo-server:functional-828160 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)
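Taken together, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon amount to the following round trip; the commands are the ones logged above, only sequenced:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-828160
    out/minikube-linux-amd64 -p functional-828160 image load --daemon kicbase/echo-server:functional-828160 --alsologtostderr
    out/minikube-linux-amd64 -p functional-828160 image ls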

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image save kicbase/echo-server:functional-828160 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image rm kicbase/echo-server:functional-828160 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)
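Similarly, ImageSaveToFile, ImageRemove and ImageLoadFromFile above exercise a save / remove / restore cycle through a tarball. The commands below are copied from those tests, with only the tarball path shortened:

    out/minikube-linux-amd64 -p functional-828160 image save kicbase/echo-server:functional-828160 /tmp/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-828160 image rm kicbase/echo-server:functional-828160 --alsologtostderr
    out/minikube-linux-amd64 -p functional-828160 image load /tmp/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-828160 image ls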

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-828160
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-828160 image save --daemon kicbase/echo-server:functional-828160 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-828160
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-828160
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-828160
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-828160
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-5703/.minikube/files/etc/test/nested/copy/9254/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (39.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853944 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-853944 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (39.13106338s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (39.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 19:38:55.275569    9254 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853944 --alsologtostderr -v=8
E1212 19:38:55.307418    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-853944 --alsologtostderr -v=8: (5.930937s)
functional_test.go:678: soft start took 5.931283537s for "functional-853944" cluster.
I1212 19:39:01.206811    9254 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.93s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-853944 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 cache add registry.k8s.io/pause:3.1: (1.040867125s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 cache add registry.k8s.io/pause:3.3: (1.090739738s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.93s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3452922235/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cache add minikube-local-cache-test:functional-853944
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cache delete minikube-local-cache-test:functional-853944
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-853944
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.880392ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.48s)
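The reload behaviour verified here can be reproduced directly; a minimal sketch against the same profile, using only commands that appear in this run:

	out/minikube-linux-amd64 -p functional-853944 cache add registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image removed from the node
	out/minikube-linux-amd64 -p functional-853944 cache reload                                            # re-loads cached images into the node
	out/minikube-linux-amd64 -p functional-853944 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again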

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 kubectl -- --context functional-853944 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-853944 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (45.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853944 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-853944 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.94847923s)
functional_test.go:776: restart took 45.948580273s for "functional-853944" cluster.
I1212 19:39:53.571066    9254 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (45.95s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-853944 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 logs: (1.128290805s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3393365146/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3393365146/001/logs.txt: (1.133714898s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-853944 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-853944
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-853944: exit status 115 (323.300009ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31788 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-853944 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.05s)
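The exit status 115 above is the expected outcome for a service with no running backend; a minimal sketch of the same sequence (testdata/invalidsvc.yaml is the repository's test fixture):

	kubectl --context functional-853944 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-853944    # exit 115, SVC_UNREACHABLE: no running pod for the service
	kubectl --context functional-853944 delete -f testdata/invalidsvc.yaml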

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 config get cpus: exit status 14 (96.900866ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 config get cpus: exit status 14 (69.567814ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)
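Exit status 14 marks a lookup of an unset key, so the set/get/unset cycle above behaves as expected; a minimal sketch:

	out/minikube-linux-amd64 -p functional-853944 config get cpus      # exit 14 while the key is unset
	out/minikube-linux-amd64 -p functional-853944 config set cpus 2
	out/minikube-linux-amd64 -p functional-853944 config get cpus      # exit 0
	out/minikube-linux-amd64 -p functional-853944 config unset cpus
	out/minikube-linux-amd64 -p functional-853944 config get cpus      # exit 14 again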

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (8.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-853944 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-853944 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 63320: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (8.64s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853944 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-853944 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (171.683108ms)

                                                
                                                
-- stdout --
	* [functional-853944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:40:11.469021   62443 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:40:11.469347   62443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:11.469359   62443 out.go:374] Setting ErrFile to fd 2...
	I1212 19:40:11.469365   62443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:11.469574   62443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:40:11.470017   62443 out.go:368] Setting JSON to false
	I1212 19:40:11.471131   62443 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1358,"bootTime":1765567053,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:40:11.471198   62443 start.go:143] virtualization: kvm guest
	I1212 19:40:11.473348   62443 out.go:179] * [functional-853944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:40:11.474853   62443 notify.go:221] Checking for updates...
	I1212 19:40:11.474887   62443 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:40:11.476347   62443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:40:11.477861   62443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:40:11.479067   62443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:40:11.480223   62443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:40:11.481573   62443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:40:11.483539   62443 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 19:40:11.484384   62443 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:40:11.509950   62443 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:40:11.510044   62443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:40:11.567397   62443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-12 19:40:11.558385203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:40:11.567500   62443 docker.go:319] overlay module found
	I1212 19:40:11.569077   62443 out.go:179] * Using the docker driver based on existing profile
	I1212 19:40:11.570110   62443 start.go:309] selected driver: docker
	I1212 19:40:11.570121   62443 start.go:927] validating driver "docker" against &{Name:functional-853944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-853944 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:40:11.570201   62443 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:40:11.571719   62443 out.go:203] 
	W1212 19:40:11.572714   62443 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 19:40:11.573755   62443 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853944 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.40s)
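Both dry-run invocations come from the log above: the first trips the memory validation, the second validates the existing profile; a minimal sketch:

	# exit 23: RSRC_INSUFFICIENT_REQ_MEMORY, 250MiB is below the 1800MB usable minimum
	out/minikube-linux-amd64 start -p functional-853944 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
	# exit 0: validates against the existing profile (Memory:4096) without starting anything
	out/minikube-linux-amd64 start -p functional-853944 --dry-run --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0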

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853944 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-853944 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (169.952088ms)

                                                
                                                
-- stdout --
	* [functional-853944] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:40:11.290188   62322 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:40:11.290302   62322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:11.290310   62322 out.go:374] Setting ErrFile to fd 2...
	I1212 19:40:11.290318   62322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:11.290603   62322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:40:11.291012   62322 out.go:368] Setting JSON to false
	I1212 19:40:11.291905   62322 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1358,"bootTime":1765567053,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:40:11.291954   62322 start.go:143] virtualization: kvm guest
	I1212 19:40:11.293961   62322 out.go:179] * [functional-853944] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 19:40:11.295086   62322 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:40:11.295073   62322 notify.go:221] Checking for updates...
	I1212 19:40:11.297129   62322 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:40:11.298571   62322 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 19:40:11.299701   62322 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 19:40:11.300828   62322 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:40:11.301911   62322 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:40:11.303544   62322 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 19:40:11.304003   62322 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:40:11.332087   62322 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 19:40:11.332230   62322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:40:11.393374   62322 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-12 19:40:11.383375159 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:40:11.393476   62322 docker.go:319] overlay module found
	I1212 19:40:11.395011   62322 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 19:40:11.396433   62322 start.go:309] selected driver: docker
	I1212 19:40:11.396451   62322 start.go:927] validating driver "docker" against &{Name:functional-853944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-853944 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:40:11.396563   62322 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:40:11.398254   62322 out.go:203] 
	W1212 19:40:11.399410   62322 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 19:40:11.400620   62322 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.98s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-853944 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-853944 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-ltth7" [d6e4de5f-05e6-488c-8d7e-a383a70c38ac] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-ltth7" [d6e4de5f-05e6-488c-8d7e-a383a70c38ac] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.002678029s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31848
functional_test.go:1680: http://192.168.49.2:31848: success! body:
Request served by hello-node-connect-9f67c86d4-ltth7

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31848
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.50s)
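A manual equivalent of this check, with kubectl wait and curl substituted (as assumptions) for the test helper's polling and HTTP client:

	kubectl --context functional-853944 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-853944 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-853944 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
	URL=$(out/minikube-linux-amd64 -p functional-853944 service hello-node-connect --url)
	curl "$URL"    # echo-server replies with the request details, as in the body above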

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (19.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [b11bcba7-387c-4897-bcfc-7cdabff8c8ea] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002631619s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-853944 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-853944 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-853944 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-853944 apply -f testdata/storage-provisioner/pod.yaml
I1212 19:40:06.749785    9254 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [447587ec-5344-4757-b241-a9fe36500206] Pending
helpers_test.go:353: "sp-pod" [447587ec-5344-4757-b241-a9fe36500206] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004297755s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-853944 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-853944 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-853944 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f13f7fa6-77e1-4502-9d89-dcd0d5764791] Pending
helpers_test.go:353: "sp-pod" [f13f7fa6-77e1-4502-9d89-dcd0d5764791] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004337599s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-853944 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (19.50s)
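The point of the pod delete/re-apply above is that data written to the claim survives the pod; a minimal sketch using the same testdata manifests:

	kubectl --context functional-853944 apply -f testdata/storage-provisioner/pvc.yaml   # claim "myclaim"
	kubectl --context functional-853944 apply -f testdata/storage-provisioner/pod.yaml   # pod "sp-pod" mounts the claim at /tmp/mount
	kubectl --context functional-853944 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-853944 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-853944 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
	kubectl --context functional-853944 exec sp-pod -- ls /tmp/mount                     # foo is still there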

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.63s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cp testdata/cp-test.txt /home/docker/cp-test.txt
2025/12/12 19:40:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh -n functional-853944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cp functional-853944:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1510341525/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh -n functional-853944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh -n functional-853944 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.82s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (19.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-853944 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-4k4vh" [45911dd1-7673-48a8-8b82-370b29bb8e88] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-4k4vh" [45911dd1-7673-48a8-8b82-370b29bb8e88] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 15.003309859s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-853944 exec mysql-7d7b65bc95-4k4vh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-853944 exec mysql-7d7b65bc95-4k4vh -- mysql -ppassword -e "show databases;": exit status 1 (119.896773ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:40:35.213969    9254 retry.go:31] will retry after 757.72477ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-853944 exec mysql-7d7b65bc95-4k4vh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-853944 exec mysql-7d7b65bc95-4k4vh -- mysql -ppassword -e "show databases;": exit status 1 (90.665205ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:40:36.062689    9254 retry.go:31] will retry after 2.239892972s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-853944 exec mysql-7d7b65bc95-4k4vh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-853944 exec mysql-7d7b65bc95-4k4vh -- mysql -ppassword -e "show databases;": exit status 1 (86.65464ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:40:38.389590    9254 retry.go:31] will retry after 1.419277644s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-853944 exec mysql-7d7b65bc95-4k4vh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (19.98s)
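Note on the retries above: even after the mysql pod reports Running, the first few exec attempts fail (ERROR 1045 while the root password is being set, then ERROR 2002 while mysqld restarts), so the harness re-runs the query with a growing delay. Below is a minimal Go sketch of that retry loop, assuming kubectl is on PATH and reusing the context and pod name from this run (in the real test both are discovered at runtime, not hard-coded).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// queryWithRetry keeps re-running `kubectl exec ... mysql -e "show databases;"`
// until it succeeds or the attempts run out, mirroring the retry.go behaviour
// visible in the log above.
func queryWithRetry(context, pod string, attempts int) error {
	var lastErr error
	delay := time.Second
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "--context", context, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("databases:\n%s", out)
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(delay)
		delay *= 2 // back off while mysqld finishes initializing
	}
	return lastErr
}

func main() {
	if err := queryWithRetry("functional-853944", "mysql-7d7b65bc95-4k4vh", 5); err != nil {
		fmt.Println("mysql never became queryable:", err)
	}
}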

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9254/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo cat /etc/test/nested/copy/9254/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9254.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo cat /etc/ssl/certs/9254.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9254.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo cat /usr/share/ca-certificates/9254.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo cat /etc/ssl/certs/51391683.0"
I1212 19:40:13.816390    9254 detect.go:223] nested VM detected
functional_test.go:2004: Checking for existence of /etc/ssl/certs/92542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo cat /etc/ssl/certs/92542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/92542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo cat /usr/share/ca-certificates/92542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)
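CertSync reads the synced host certificate back from three guest locations: /etc/ssl/certs/<pid>.pem, /usr/share/ca-certificates/<pid>.pem, and the OpenSSL hash name (51391683.0). A small sketch of the same spot-check, assuming the binary path, profile, and PID-derived file names from this run:

package main

import (
	"fmt"
	"os/exec"
)

// certPaths are the guest locations the CertSync test reads back; the file
// name is derived from the test process PID (9254 in this run).
var certPaths = []string{
	"/etc/ssl/certs/9254.pem",
	"/usr/share/ca-certificates/9254.pem",
	"/etc/ssl/certs/51391683.0", // OpenSSL subject-hash name
}

func main() {
	for _, p := range certPaths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-853944",
			"ssh", "sudo cat "+p).CombinedOutput()
		if err != nil {
			fmt.Printf("missing %s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", p, len(out))
	}
}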

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-853944 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh "sudo systemctl is-active docker": exit status 1 (275.094527ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh "sudo systemctl is-active containerd": exit status 1 (301.024252ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)
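The non-zero exits above are the passing case: `systemctl is-active` exits with status 3 for an inactive unit, minikube ssh propagates that, and the test only cares that stdout says "inactive" for docker and containerd while crio is the active runtime. A hedged sketch of that check under the same assumptions (binary path and profile taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeIsInactive reports whether `systemctl is-active <unit>` inside the
// guest prints "inactive". systemctl exits 3 for inactive units, so a non-nil
// error is expected on the happy path and only the output is decisive.
func runtimeIsInactive(profile, unit string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state == "inactive" {
		return true, nil
	}
	if err != nil {
		return false, fmt.Errorf("unit %s in state %q: %v", unit, state, err)
	}
	return false, nil
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		ok, err := runtimeIsInactive("functional-853944", unit)
		fmt.Printf("%s inactive=%v err=%v\n", unit, ok, err)
	}
}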

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-853944 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-853944 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-853944 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-853944 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 60004: os: process already finished
helpers_test.go:526: unable to kill pid 59674: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-853944 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-853944 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [1afbc0e3-83ed-4729-88b9-4ca55edf5f83] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [1afbc0e3-83ed-4729-88b9-4ca55edf5f83] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003608718s
I1212 19:40:09.749234    9254 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)
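The setup step applies testdata/testsvc.yaml and then polls for a Ready pod labelled run=nginx-svc with a 4 minute budget. As an illustration only, the same wait can be expressed with `kubectl wait` instead of the test's own poll helper; this sketch assumes the context name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the nginx-svc pod is Ready or the 4 minute budget from the
	// log above is exhausted (alternative to the helpers_test poll loop).
	cmd := exec.Command("kubectl", "--context", "functional-853944",
		"wait", "--for=condition=Ready", "pod", "-l", "run=nginx-svc",
		"--timeout=4m")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}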

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-853944 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-853944 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-6mvw8" [b71f8ee0-1446-480b-a9c7-828baa3ae6f5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-6mvw8" [b71f8ee0-1446-480b-a9c7-828baa3ae6f5] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003614481s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.16s)
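DeployApp is the standard create/expose pair: a hello-node deployment from the kicbase/echo-server image, exposed as a NodePort on 8080, followed by the readiness wait above. A minimal sketch of the two kubectl calls, assuming the context name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-853944"}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}

func main() {
	// Create the echo-server deployment and expose it on a NodePort,
	// matching functional_test.go:1451/1455 in the log above.
	run("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
	run("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
}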

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "318.543239ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.625902ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-853944 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.160.210 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-853944 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "345.77883ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.238191ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2952615462/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765568409917044115" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2952615462/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765568409917044115" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2952615462/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765568409917044115" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2952615462/001/test-1765568409917044115
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (289.894814ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:40:10.207234    9254 retry.go:31] will retry after 265.975124ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 19:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 19:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 19:40 test-1765568409917044115
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh cat /mount-9p/test-1765568409917044115
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-853944 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [0c380575-cdda-4a2b-9f6f-73651f415202] Pending
helpers_test.go:353: "busybox-mount" [0c380575-cdda-4a2b-9f6f-73651f415202] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [0c380575-cdda-4a2b-9f6f-73651f415202] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [0c380575-cdda-4a2b-9f6f-73651f415202] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003944087s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-853944 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2952615462/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.60s)
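The any-port test starts `minikube mount <hostdir>:/mount-9p` as a background daemon and then polls `findmnt -T /mount-9p | grep 9p` over ssh until the 9p mount appears; the first probe in the log fails simply because the mount is still being established. A hedged sketch of that polling, assuming the profile and mount point from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForNinePMount polls `findmnt -T <mountpoint> | grep 9p` inside the guest
// until the 9p filesystem shows up or the deadline passes.
func waitForNinePMount(profile, mountpoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", mountpoint))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible
		}
		time.Sleep(300 * time.Millisecond) // roughly the retry interval in the log
	}
	return fmt.Errorf("9p mount at %s never appeared", mountpoint)
}

func main() {
	if err := waitForNinePMount("functional-853944", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}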

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 service list -o json
functional_test.go:1504: Took "526.327937ms" to run "out/minikube-linux-amd64 -p functional-853944 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30577
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30577
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853944 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-853944
localhost/kicbase/echo-server:functional-853944
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853944 image ls --format short --alsologtostderr:
I1212 19:40:22.068219   68175 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:22.068600   68175 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.068609   68175 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:22.068615   68175 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.068947   68175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:40:22.072180   68175 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.072401   68175 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.073446   68175 cli_runner.go:164] Run: docker container inspect functional-853944 --format={{.State.Status}}
I1212 19:40:22.109415   68175 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:22.109479   68175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853944
I1212 19:40:22.137575   68175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-853944/id_rsa Username:docker}
I1212 19:40:22.255359   68175 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.33s)
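The --alsologtostderr trace shows how `image ls` works on this driver: it resolves the node's ssh port from `docker container inspect` and then runs `sudo crictl images --output json` inside the guest. A minimal sketch that issues the same final crictl call through `minikube ssh`, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List the node's images straight from crictl, the same command the
	// `image ls` trace above ends with.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-853944",
		"ssh", "sudo crictl images --output json").CombinedOutput()
	if err != nil {
		fmt.Println("crictl images failed:", err)
		return
	}
	fmt.Printf("%s", out)
}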

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853944 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: db5de78114ee9f8262989599de7342156cdde8c5f01d1b76b1ee760d389d9922
repoDigests:
- localhost/minikube-local-cache-test@sha256:fb3768d6fabfe22de98ccddf9853d402081275abc3cb9ac8c9d5274340dd22b0
repoTags:
- localhost/minikube-local-cache-test:functional-853944
size: "3330"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-853944
size: "4944818"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853944 image ls --format yaml --alsologtostderr:
I1212 19:40:22.061542   68176 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:22.062052   68176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.062101   68176 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:22.062125   68176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.062460   68176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:40:22.063193   68176 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.063340   68176 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.063916   68176 cli_runner.go:164] Run: docker container inspect functional-853944 --format={{.State.Status}}
I1212 19:40:22.094310   68176 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:22.094449   68176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853944
I1212 19:40:22.122963   68176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-853944/id_rsa Username:docker}
I1212 19:40:22.240514   68176 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (6.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh pgrep buildkitd: exit status 1 (382.543513ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image build -t localhost/my-image:functional-853944 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 image build -t localhost/my-image:functional-853944 testdata/build --alsologtostderr: (6.353318436s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853944 image build -t localhost/my-image:functional-853944 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d9592236db2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-853944
--> 174e4d5b51c
Successfully tagged localhost/my-image:functional-853944
174e4d5b51c98fa2355c504066aa5c510213a2718236f2abda8a25226202e35e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853944 image build -t localhost/my-image:functional-853944 testdata/build --alsologtostderr:
I1212 19:40:22.469047   68405 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:22.469157   68405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.469165   68405 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:22.469171   68405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:22.469487   68405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
I1212 19:40:22.470261   68405 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.470984   68405 config.go:182] Loaded profile config "functional-853944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:22.471485   68405 cli_runner.go:164] Run: docker container inspect functional-853944 --format={{.State.Status}}
I1212 19:40:22.495763   68405 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:22.495816   68405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853944
I1212 19:40:22.520153   68405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/functional-853944/id_rsa Username:docker}
I1212 19:40:22.630778   68405 build_images.go:162] Building image from path: /tmp/build.1620475222.tar
I1212 19:40:22.630842   68405 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 19:40:22.642107   68405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1620475222.tar
I1212 19:40:22.646996   68405 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1620475222.tar: stat -c "%s %y" /var/lib/minikube/build/build.1620475222.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1620475222.tar': No such file or directory
I1212 19:40:22.647075   68405 ssh_runner.go:362] scp /tmp/build.1620475222.tar --> /var/lib/minikube/build/build.1620475222.tar (3072 bytes)
I1212 19:40:22.670726   68405 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1620475222
I1212 19:40:22.682188   68405 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1620475222 -xf /var/lib/minikube/build/build.1620475222.tar
I1212 19:40:22.692513   68405 crio.go:315] Building image: /var/lib/minikube/build/build.1620475222
I1212 19:40:22.692577   68405 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-853944 /var/lib/minikube/build/build.1620475222 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 19:40:28.715217   68405 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-853944 /var/lib/minikube/build/build.1620475222 --cgroup-manager=cgroupfs: (6.022610561s)
I1212 19:40:28.715329   68405 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1620475222
I1212 19:40:28.723142   68405 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1620475222.tar
I1212 19:40:28.730503   68405 build_images.go:218] Built localhost/my-image:functional-853944 from /tmp/build.1620475222.tar
I1212 19:40:28.730532   68405 build_images.go:134] succeeded building to: functional-853944
I1212 19:40:28.730539   68405 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (6.96s)
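Since buildkitd is not running (the pgrep probe above fails), the build is delegated to the node's own tooling: the context is tarred, copied to /var/lib/minikube/build, unpacked, and built with `sudo podman build ... --cgroup-manager=cgroupfs`, as the stderr trace shows. A sketch of driving the same `minikube image build` step from Go, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Build testdata/build into localhost/my-image:functional-853944 using the
	// cluster's own runtime (podman under crio), as the test invokes it above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-853944",
		"image", "build", "-t", "localhost/my-image:functional-853944",
		"testdata/build", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("image build failed:", err)
	}
}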

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-853944
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image load --daemon kicbase/echo-server:functional-853944 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2840961011/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.996997ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:40:15.821224    9254 retry.go:31] will retry after 566.385448ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2840961011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh "sudo umount -f /mount-9p": exit status 1 (289.629597ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-853944 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2840961011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.02s)
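By the time the final `sudo umount -f /mount-9p` runs here, the mount daemon has already torn the mount down, so umount exits with status 32 ("not mounted.") and the test only logs it rather than failing. A sketch of a cleanup that tolerates that case, under the same profile assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// forceUnmount runs `sudo umount -f` inside the guest and treats
// "not mounted." as success, mirroring the tolerated exit status 32 above.
func forceUnmount(profile, mountpoint string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo umount -f "+mountpoint).CombinedOutput()
	if err == nil || strings.Contains(string(out), "not mounted") {
		return nil
	}
	return fmt.Errorf("umount %s: %v: %s", mountpoint, err, out)
}

func main() {
	fmt.Println(forceUnmount("functional-853944", "/mount-9p"))
}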

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image load --daemon kicbase/echo-server:functional-853944 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-853944 image ls: (1.335630434s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3551991631/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3551991631/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3551991631/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T" /mount1: exit status 1 (379.579601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:40:17.915440    9254 retry.go:31] will retry after 711.826595ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-853944 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3551991631/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3551991631/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853944 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3551991631/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.00s)
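VerifyCleanup starts three mounts of the same host directory and then tears them all down with a single `minikube mount -p <profile> --kill=true`, which is why the per-mount stop calls afterwards find no parent process. A small sketch of that kill-all step, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Kill every background `minikube mount` process for the profile in one
	// call, matching functional_test_mount_test.go:370 in the log above.
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-853944", "--kill=true").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("mount cleanup failed:", err)
	}
}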

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-853944
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image load --daemon kicbase/echo-server:functional-853944 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image save kicbase/echo-server:functional-853944 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image rm kicbase/echo-server:functional-853944 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-853944
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-853944 image save --daemon kicbase/echo-server:functional-853944 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-853944
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)
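
Note: the ImageSaveToFile / ImageRemove / ImageLoadFromFile / ImageSaveDaemon steps above form one save-and-restore round trip. A sketch of the same flow, using the commands from this run (the tarball path is an assumption; the CI job writes into its workspace, any writable path works):

    PROFILE=functional-853944
    TAR=/tmp/echo-server-save.tar
    # Export the in-cluster image to a tarball, then remove it from the cluster
    out/minikube-linux-amd64 -p "$PROFILE" image save kicbase/echo-server:"$PROFILE" "$TAR" --alsologtostderr
    out/minikube-linux-amd64 -p "$PROFILE" image rm kicbase/echo-server:"$PROFILE" --alsologtostderr
    out/minikube-linux-amd64 -p "$PROFILE" image ls    # the tag should no longer be listed
    # Re-import from the tarball, then push a copy back into the host Docker daemon
    out/minikube-linux-amd64 -p "$PROFILE" image load "$TAR" --alsologtostderr
    out/minikube-linux-amd64 -p "$PROFILE" image save --daemon kicbase/echo-server:"$PROFILE" --alsologtostderr
    docker image inspect localhost/kicbase/echo-server:"$PROFILE"    # note the localhost/ prefix after the round trip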

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-853944
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-853944
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-853944
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (165.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1212 19:41:11.445694    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:41:39.149116    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.071553    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.077950    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.089259    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.110584    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.151926    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.233331    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.394775    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:32.716463    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:33.358506    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:34.640696    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:37.202498    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:42.323969    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:52.565675    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:43:13.047050    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m44.49699526s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (165.20s)
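
Note: the HA topology used by the remaining TestMultiControlPlane steps is created with a single start invocation. A minimal sketch with the flags from this run (memory, driver, and runtime values are this job's configuration; adjust as needed):

    # Start a cluster with multiple control planes (--ha) and wait for all components to be healthy
    out/minikube-linux-amd64 -p ha-833900 start --ha --memory 3072 --wait true \
        --driver=docker --container-runtime=crio --alsologtostderr -v 5
    # One stanza per node: control planes report host/kubelet/apiserver/kubeconfig, workers only host/kubelet
    out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5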

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 kubectl -- rollout status deployment/busybox: (1.830652566s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-2fxtx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-bx97x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-zgmf5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-2fxtx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-bx97x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-zgmf5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-2fxtx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-bx97x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-zgmf5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.86s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-2fxtx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-2fxtx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-bx97x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-bx97x -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-zgmf5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 kubectl -- exec busybox-7b57f96db7-zgmf5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)
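
Note: DeployApp and PingHostFromPods both reduce to exec-ing into a busybox replica and checking name resolution and host reachability. A sketch under the assumption that the test deployment's pods are the only pods in the default namespace of this profile:

    # Pick one busybox pod from the test deployment (any of the three replicas works)
    POD=$(kubectl --context ha-833900 get pods -o jsonpath='{.items[0].metadata.name}')
    # In-cluster DNS: an external name and the in-cluster service name must both resolve
    kubectl --context ha-833900 exec "$POD" -- nslookup kubernetes.io
    kubectl --context ha-833900 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
    # Host reachability: resolve host.minikube.internal, then ping the docker network gateway (192.168.49.1 in this run)
    kubectl --context ha-833900 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-833900 exec "$POD" -- sh -c "ping -c 1 192.168.49.1"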

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 node add --alsologtostderr -v 5
E1212 19:43:54.009448    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 node add --alsologtostderr -v 5: (53.020583104s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.86s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-833900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp testdata/cp-test.txt ha-833900:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile826759145/001/cp-test_ha-833900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900:/home/docker/cp-test.txt ha-833900-m02:/home/docker/cp-test_ha-833900_ha-833900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test_ha-833900_ha-833900-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900:/home/docker/cp-test.txt ha-833900-m03:/home/docker/cp-test_ha-833900_ha-833900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test_ha-833900_ha-833900-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900:/home/docker/cp-test.txt ha-833900-m04:/home/docker/cp-test_ha-833900_ha-833900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test_ha-833900_ha-833900-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp testdata/cp-test.txt ha-833900-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile826759145/001/cp-test_ha-833900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m02:/home/docker/cp-test.txt ha-833900:/home/docker/cp-test_ha-833900-m02_ha-833900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test_ha-833900-m02_ha-833900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m02:/home/docker/cp-test.txt ha-833900-m03:/home/docker/cp-test_ha-833900-m02_ha-833900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test_ha-833900-m02_ha-833900-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m02:/home/docker/cp-test.txt ha-833900-m04:/home/docker/cp-test_ha-833900-m02_ha-833900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test_ha-833900-m02_ha-833900-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp testdata/cp-test.txt ha-833900-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile826759145/001/cp-test_ha-833900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m03:/home/docker/cp-test.txt ha-833900:/home/docker/cp-test_ha-833900-m03_ha-833900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test_ha-833900-m03_ha-833900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m03:/home/docker/cp-test.txt ha-833900-m02:/home/docker/cp-test_ha-833900-m03_ha-833900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test_ha-833900-m03_ha-833900-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m03:/home/docker/cp-test.txt ha-833900-m04:/home/docker/cp-test_ha-833900-m03_ha-833900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test_ha-833900-m03_ha-833900-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp testdata/cp-test.txt ha-833900-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile826759145/001/cp-test_ha-833900-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m04:/home/docker/cp-test.txt ha-833900:/home/docker/cp-test_ha-833900-m04_ha-833900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900 "sudo cat /home/docker/cp-test_ha-833900-m04_ha-833900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m04:/home/docker/cp-test.txt ha-833900-m02:/home/docker/cp-test_ha-833900-m04_ha-833900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m02 "sudo cat /home/docker/cp-test_ha-833900-m04_ha-833900-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 cp ha-833900-m04:/home/docker/cp-test.txt ha-833900-m03:/home/docker/cp-test_ha-833900-m04_ha-833900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 ssh -n ha-833900-m03 "sudo cat /home/docker/cp-test_ha-833900-m04_ha-833900-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.24s)
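
Note: the CopyFile step repeats the same cp-then-cat check for every node pair. Collapsed into a loop it looks roughly like the sketch below (node names are this run's; the loop is illustrative, not the test's actual implementation):

    PROFILE=ha-833900
    NODES="ha-833900 ha-833900-m02 ha-833900-m03 ha-833900-m04"
    for src in $NODES; do
      # Seed the source node from the host, then fan the file out to every other node and read it back
      out/minikube-linux-amd64 -p "$PROFILE" cp testdata/cp-test.txt "$src":/home/docker/cp-test.txt
      out/minikube-linux-amd64 -p "$PROFILE" ssh -n "$src" "sudo cat /home/docker/cp-test.txt"
      for dst in $NODES; do
        [ "$src" = "$dst" ] && continue
        out/minikube-linux-amd64 -p "$PROFILE" cp "$src":/home/docker/cp-test.txt \
            "$dst":/home/docker/cp-test_"$src"_"$dst".txt
        out/minikube-linux-amd64 -p "$PROFILE" ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
      done
    done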

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 node stop m02 --alsologtostderr -v 5
E1212 19:45:00.423293    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:00.429702    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:00.441021    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:00.462326    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:00.503676    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:00.585045    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:00.746500    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:01.068144    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:01.710227    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 node stop m02 --alsologtostderr -v 5: (18.331288916s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
E1212 19:45:02.992023    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5: exit status 7 (663.142354ms)

                                                
                                                
-- stdout --
	ha-833900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833900-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833900-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833900-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:45:02.415924   89111 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:45:02.416049   89111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:45:02.416061   89111 out.go:374] Setting ErrFile to fd 2...
	I1212 19:45:02.416067   89111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:45:02.416357   89111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:45:02.416574   89111 out.go:368] Setting JSON to false
	I1212 19:45:02.416600   89111 mustload.go:66] Loading cluster: ha-833900
	I1212 19:45:02.416681   89111 notify.go:221] Checking for updates...
	I1212 19:45:02.417007   89111 config.go:182] Loaded profile config "ha-833900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:45:02.417022   89111 status.go:174] checking status of ha-833900 ...
	I1212 19:45:02.417524   89111 cli_runner.go:164] Run: docker container inspect ha-833900 --format={{.State.Status}}
	I1212 19:45:02.436763   89111 status.go:371] ha-833900 host status = "Running" (err=<nil>)
	I1212 19:45:02.436791   89111 host.go:66] Checking if "ha-833900" exists ...
	I1212 19:45:02.437034   89111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833900
	I1212 19:45:02.454531   89111 host.go:66] Checking if "ha-833900" exists ...
	I1212 19:45:02.454794   89111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:45:02.454841   89111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833900
	I1212 19:45:02.472468   89111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/ha-833900/id_rsa Username:docker}
	I1212 19:45:02.563467   89111 ssh_runner.go:195] Run: systemctl --version
	I1212 19:45:02.569777   89111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:45:02.583126   89111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:45:02.640948   89111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 19:45:02.631193947 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:45:02.641484   89111 kubeconfig.go:125] found "ha-833900" server: "https://192.168.49.254:8443"
	I1212 19:45:02.641516   89111 api_server.go:166] Checking apiserver status ...
	I1212 19:45:02.641559   89111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:45:02.652627   89111 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	W1212 19:45:02.660652   89111 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:45:02.660690   89111 ssh_runner.go:195] Run: ls
	I1212 19:45:02.664300   89111 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 19:45:02.669625   89111 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 19:45:02.669648   89111 status.go:463] ha-833900 apiserver status = Running (err=<nil>)
	I1212 19:45:02.669660   89111 status.go:176] ha-833900 status: &{Name:ha-833900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:45:02.669680   89111 status.go:174] checking status of ha-833900-m02 ...
	I1212 19:45:02.669977   89111 cli_runner.go:164] Run: docker container inspect ha-833900-m02 --format={{.State.Status}}
	I1212 19:45:02.687139   89111 status.go:371] ha-833900-m02 host status = "Stopped" (err=<nil>)
	I1212 19:45:02.687158   89111 status.go:384] host is not running, skipping remaining checks
	I1212 19:45:02.687169   89111 status.go:176] ha-833900-m02 status: &{Name:ha-833900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:45:02.687184   89111 status.go:174] checking status of ha-833900-m03 ...
	I1212 19:45:02.687431   89111 cli_runner.go:164] Run: docker container inspect ha-833900-m03 --format={{.State.Status}}
	I1212 19:45:02.703958   89111 status.go:371] ha-833900-m03 host status = "Running" (err=<nil>)
	I1212 19:45:02.703975   89111 host.go:66] Checking if "ha-833900-m03" exists ...
	I1212 19:45:02.704178   89111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833900-m03
	I1212 19:45:02.720493   89111 host.go:66] Checking if "ha-833900-m03" exists ...
	I1212 19:45:02.720739   89111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:45:02.720774   89111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833900-m03
	I1212 19:45:02.736558   89111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/ha-833900-m03/id_rsa Username:docker}
	I1212 19:45:02.827198   89111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:45:02.839843   89111 kubeconfig.go:125] found "ha-833900" server: "https://192.168.49.254:8443"
	I1212 19:45:02.839868   89111 api_server.go:166] Checking apiserver status ...
	I1212 19:45:02.839902   89111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:45:02.850083   89111 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	W1212 19:45:02.857631   89111 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:45:02.857669   89111 ssh_runner.go:195] Run: ls
	I1212 19:45:02.860972   89111 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 19:45:02.864828   89111 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 19:45:02.864850   89111 status.go:463] ha-833900-m03 apiserver status = Running (err=<nil>)
	I1212 19:45:02.864860   89111 status.go:176] ha-833900-m03 status: &{Name:ha-833900-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:45:02.864876   89111 status.go:174] checking status of ha-833900-m04 ...
	I1212 19:45:02.865117   89111 cli_runner.go:164] Run: docker container inspect ha-833900-m04 --format={{.State.Status}}
	I1212 19:45:02.884374   89111 status.go:371] ha-833900-m04 host status = "Running" (err=<nil>)
	I1212 19:45:02.884394   89111 host.go:66] Checking if "ha-833900-m04" exists ...
	I1212 19:45:02.884624   89111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833900-m04
	I1212 19:45:02.902181   89111 host.go:66] Checking if "ha-833900-m04" exists ...
	I1212 19:45:02.902456   89111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:45:02.902495   89111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833900-m04
	I1212 19:45:02.917715   89111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32804 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/ha-833900-m04/id_rsa Username:docker}
	I1212 19:45:03.008853   89111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:45:03.020798   89111 status.go:176] ha-833900-m04 status: &{Name:ha-833900-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.00s)
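
Note: as the output above shows, status exits non-zero (7 in this run) whenever any node is stopped, so scripts should branch on the exit code rather than treating it as a hard failure. A small sketch:

    out/minikube-linux-amd64 -p ha-833900 node stop m02 --alsologtostderr -v 5
    # Non-zero here does not mean the command broke; it reflects the stopped node
    if out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5; then
        echo "all nodes running"
    else
        echo "status exited with $? (expected while ha-833900-m02 is stopped)"
    fi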

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 node start m02 --alsologtostderr -v 5
E1212 19:45:05.553417    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:10.675512    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 node start m02 --alsologtostderr -v 5: (7.60493805s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (190.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 stop --alsologtostderr -v 5
E1212 19:45:15.930788    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:20.916994    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:41.398357    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 stop --alsologtostderr -v 5: (33.290783276s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 start --wait true --alsologtostderr -v 5
E1212 19:46:11.445221    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:46:22.359672    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:32.071307    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:44.281072    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:59.774251    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 start --wait true --alsologtostderr -v 5: (2m37.527488739s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (190.94s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (32.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 node delete m03 --alsologtostderr -v 5: (31.26354913s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (32.02s)
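
Note: removing a control-plane member and confirming the survivors are Ready can be scripted exactly as the test does; the go-template is the one run at ha_test.go:521:

    out/minikube-linux-amd64 -p ha-833900 node delete m03 --alsologtostderr -v 5
    kubectl get nodes
    # Every remaining node should print "True" for its Ready condition
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'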

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 stop --alsologtostderr -v 5: (35.895232203s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5: exit status 7 (111.67658ms)

                                                
                                                
-- stdout --
	ha-833900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833900-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833900-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:49:32.605604  103717 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:49:32.605896  103717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:32.605907  103717 out.go:374] Setting ErrFile to fd 2...
	I1212 19:49:32.605911  103717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:32.606165  103717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:49:32.606384  103717 out.go:368] Setting JSON to false
	I1212 19:49:32.606509  103717 mustload.go:66] Loading cluster: ha-833900
	I1212 19:49:32.606638  103717 notify.go:221] Checking for updates...
	I1212 19:49:32.607017  103717 config.go:182] Loaded profile config "ha-833900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:49:32.607031  103717 status.go:174] checking status of ha-833900 ...
	I1212 19:49:32.607578  103717 cli_runner.go:164] Run: docker container inspect ha-833900 --format={{.State.Status}}
	I1212 19:49:32.626138  103717 status.go:371] ha-833900 host status = "Stopped" (err=<nil>)
	I1212 19:49:32.626165  103717 status.go:384] host is not running, skipping remaining checks
	I1212 19:49:32.626174  103717 status.go:176] ha-833900 status: &{Name:ha-833900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:49:32.626203  103717 status.go:174] checking status of ha-833900-m02 ...
	I1212 19:49:32.626442  103717 cli_runner.go:164] Run: docker container inspect ha-833900-m02 --format={{.State.Status}}
	I1212 19:49:32.643601  103717 status.go:371] ha-833900-m02 host status = "Stopped" (err=<nil>)
	I1212 19:49:32.643620  103717 status.go:384] host is not running, skipping remaining checks
	I1212 19:49:32.643625  103717 status.go:176] ha-833900-m02 status: &{Name:ha-833900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:49:32.643640  103717 status.go:174] checking status of ha-833900-m04 ...
	I1212 19:49:32.643831  103717 cli_runner.go:164] Run: docker container inspect ha-833900-m04 --format={{.State.Status}}
	I1212 19:49:32.659645  103717 status.go:371] ha-833900-m04 host status = "Stopped" (err=<nil>)
	I1212 19:49:32.659675  103717 status.go:384] host is not running, skipping remaining checks
	I1212 19:49:32.659692  103717 status.go:176] ha-833900-m04 status: &{Name:ha-833900-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1212 19:50:00.416779    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:50:28.123329    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.743703015s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.51s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (53.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 node add --control-plane --alsologtostderr -v 5
E1212 19:51:11.445561    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-833900 node add --control-plane --alsologtostderr -v 5: (52.578732809s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (53.41s)
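
Note: node add grows an existing cluster in place; without flags the new node joins as a worker (AddWorkerNode earlier in this run), while --control-plane joins it as an additional control-plane member serving the shared API endpoint (https://192.168.49.254:8443 in this run). Sketch:

    # Join an extra worker
    out/minikube-linux-amd64 -p ha-833900 node add --alsologtostderr -v 5
    # Join an extra control-plane member instead, then verify all nodes report healthy
    out/minikube-linux-amd64 -p ha-833900 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-833900 status --alsologtostderr -v 5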

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (39.05s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-186519 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-186519 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.046991946s)
--- PASS: TestJSONOutput/start/Command (39.05s)
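
Note: with --output=json the start command emits one JSON event per line instead of human-readable text, which the Audit and CurrentSteps subtests below assert on. A sketch of consuming that stream; the jq filter and field access are assumptions based on minikube's CloudEvents-style output and should be checked against your version:

    # Print each event's type and message as a tab-separated line
    out/minikube-linux-amd64 start -p json-output-186519 --output=json --user=testUser \
        --memory=3072 --wait=true --driver=docker --container-runtime=crio \
      | jq -r '[.type, (.data.message // "")] | @tsv'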

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.13s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-186519 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-186519 --output=json --user=testUser: (6.133850579s)
--- PASS: TestJSONOutput/stop/Command (6.13s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-470159 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-470159 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.264764ms)

-- stdout --
	{"specversion":"1.0","id":"17d9e0c8-5d6c-4d31-a8c0-c556f9a9f15f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-470159] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b8f366f-77a8-442a-8227-a261505ebaa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"c1aa156e-e3ff-48d8-82c6-bbfbfea1841d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e9384f9e-fe97-4200-9e0a-6fbd9b218c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig"}}
	{"specversion":"1.0","id":"fe9a598e-9b86-4cb5-befc-d101016cb1ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube"}}
	{"specversion":"1.0","id":"b1497745-225a-4284-90b5-9e09efb7f5dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"422a5848-ab6a-46f7-bf41-cd72334cbfdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"67890ac4-db3d-4fde-bbe3-64ec14c65c72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-470159" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-470159
--- PASS: TestErrorJSONOutput (0.22s)
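The stdout captured above is the stream that --output=json produces: one CloudEvents-style JSON record per line, with step, info and error event types. The following is a minimal standalone sketch of consuming such a stream; it is not part of the test suite, the file name parse_events.go is hypothetical, and the struct models only the keys visible in the output above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents-style records shown in the --output=json
// stdout above; only the keys visible there are modeled.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any line that is not a JSON event
		}
		// step events carry currentstep/totalsteps; error events carry
		// exitcode and a name such as DRV_UNSUPPORTED_OS, as seen above.
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Fed the captured stdout of a start or stop run (for example, go run parse_events.go < start.json), it prints one line per event with its type and message.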

TestKicCustomNetwork/create_custom_network (28.32s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-565595 --network=
E1212 19:52:32.070883    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:34.512646    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-565595 --network=: (26.212543199s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-565595" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-565595
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-565595: (2.085929634s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.32s)

TestKicCustomNetwork/use_default_bridge_network (24.63s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-623186 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-623186 --network=bridge: (22.652338554s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-623186" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-623186
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-623186: (1.956057358s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.63s)

TestKicExistingNetwork (24.78s)

=== RUN   TestKicExistingNetwork
I1212 19:53:19.314926    9254 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 19:53:19.330009    9254 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 19:53:19.330067    9254 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 19:53:19.330083    9254 cli_runner.go:164] Run: docker network inspect existing-network
W1212 19:53:19.345693    9254 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 19:53:19.345719    9254 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1212 19:53:19.345739    9254 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1212 19:53:19.345866    9254 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 19:53:19.362680    9254 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74442dadd84e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ff:80:da:a9:72} reservation:<nil>}
I1212 19:53:19.362994    9254 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00524b000}
I1212 19:53:19.363024    9254 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 19:53:19.363069    9254 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 19:53:19.407843    9254 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-866636 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-866636 --network=existing-network: (22.705416188s)
helpers_test.go:176: Cleaning up "existing-network-866636" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-866636
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-866636: (1.94143393s)
I1212 19:53:44.073106    9254 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.78s)
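The flow exercised here, pre-creating a Docker network and then pointing minikube at it with --network, can be reproduced outside the test harness. The sketch below shells out to the same docker and minikube commands logged above; the profile name existing-network-demo is made up, and error handling is reduced to the minimum.

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and returns any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Pre-create a user-managed bridge network, as network_create.go does above.
	if err := run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1", "existing-network"); err != nil {
		log.Fatalf("network create: %v", err)
	}
	// Start a profile on the pre-existing network instead of letting minikube
	// allocate its own subnet.
	if err := run("minikube", "start", "-p", "existing-network-demo",
		"--network=existing-network", "--driver=docker", "--container-runtime=crio"); err != nil {
		log.Fatalf("minikube start: %v", err)
	}
	// Tear down in the same order the test helpers do.
	_ = run("minikube", "delete", "-p", "existing-network-demo")
	_ = run("docker", "network", "rm", "existing-network")
}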

TestKicCustomSubnet (24.25s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-079751 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-079751 --subnet=192.168.60.0/24: (22.133600802s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-079751 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-079751" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-079751
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-079751: (2.096390073s)
--- PASS: TestKicCustomSubnet (24.25s)

TestKicStaticIP (22.01s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-577589 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-577589 --static-ip=192.168.200.200: (19.769118985s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-577589 ip
helpers_test.go:176: Cleaning up "static-ip-577589" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-577589
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-577589: (2.098318545s)
--- PASS: TestKicStaticIP (22.01s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (46.33s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-356513 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-356513 --driver=docker  --container-runtime=crio: (21.358737476s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-358636 --driver=docker  --container-runtime=crio
E1212 19:55:00.416419    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-358636 --driver=docker  --container-runtime=crio: (19.159565636s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-356513
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-358636
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-358636" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-358636
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-358636: (2.339481608s)
helpers_test.go:176: Cleaning up "first-356513" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-356513
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-356513: (2.293875652s)
--- PASS: TestMinikubeProfile (46.33s)

TestMountStart/serial/StartWithMountFirst (4.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-049357 --memory=3072 --mount-string /tmp/TestMountStartserial1410448929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-049357 --memory=3072 --mount-string /tmp/TestMountStartserial1410448929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.692408776s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.69s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-049357 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.56s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-061231 --memory=3072 --mount-string /tmp/TestMountStartserial1410448929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-061231 --memory=3072 --mount-string /tmp/TestMountStartserial1410448929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.561077292s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.56s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-061231 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-049357 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-049357 --alsologtostderr -v=5: (1.648077436s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-061231 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-061231
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-061231: (1.235488075s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-061231
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-061231: (6.222198577s)
--- PASS: TestMountStart/serial/RestartStopped (7.22s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-061231 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (64.57s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-322761 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1212 19:56:11.445841    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-322761 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.113629246s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.57s)

TestMultiNode/serial/DeployApp2Nodes (2.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-322761 -- rollout status deployment/busybox: (1.411376639s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-gkjhg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-kwrvl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-gkjhg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-kwrvl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-gkjhg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-kwrvl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.79s)

TestMultiNode/serial/PingHostFrom2Pods (0.67s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-gkjhg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-gkjhg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-kwrvl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-322761 -- exec busybox-7b57f96db7-kwrvl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)

TestMultiNode/serial/AddNode (24.7s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-322761 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-322761 -v=5 --alsologtostderr: (24.091297664s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.70s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-322761 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (9.32s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp testdata/cp-test.txt multinode-322761:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile15799252/001/cp-test_multinode-322761.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761:/home/docker/cp-test.txt multinode-322761-m02:/home/docker/cp-test_multinode-322761_multinode-322761-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m02 "sudo cat /home/docker/cp-test_multinode-322761_multinode-322761-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761:/home/docker/cp-test.txt multinode-322761-m03:/home/docker/cp-test_multinode-322761_multinode-322761-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m03 "sudo cat /home/docker/cp-test_multinode-322761_multinode-322761-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp testdata/cp-test.txt multinode-322761-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile15799252/001/cp-test_multinode-322761-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761-m02:/home/docker/cp-test.txt multinode-322761:/home/docker/cp-test_multinode-322761-m02_multinode-322761.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761 "sudo cat /home/docker/cp-test_multinode-322761-m02_multinode-322761.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761-m02:/home/docker/cp-test.txt multinode-322761-m03:/home/docker/cp-test_multinode-322761-m02_multinode-322761-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m03 "sudo cat /home/docker/cp-test_multinode-322761-m02_multinode-322761-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp testdata/cp-test.txt multinode-322761-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile15799252/001/cp-test_multinode-322761-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761-m03:/home/docker/cp-test.txt multinode-322761:/home/docker/cp-test_multinode-322761-m03_multinode-322761.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761 "sudo cat /home/docker/cp-test_multinode-322761-m03_multinode-322761.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 cp multinode-322761-m03:/home/docker/cp-test.txt multinode-322761-m02:/home/docker/cp-test_multinode-322761-m03_multinode-322761-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 ssh -n multinode-322761-m02 "sudo cat /home/docker/cp-test_multinode-322761-m03_multinode-322761-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.32s)

TestMultiNode/serial/StopNode (2.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-322761 node stop m03: (1.253672282s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-322761 status: exit status 7 (467.399584ms)

-- stdout --
	multinode-322761
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-322761-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-322761-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr: exit status 7 (464.752172ms)

-- stdout --
	multinode-322761
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-322761-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-322761-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 19:57:26.446222  163637 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:57:26.446569  163637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:26.446579  163637 out.go:374] Setting ErrFile to fd 2...
	I1212 19:57:26.446583  163637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:57:26.446805  163637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:57:26.446962  163637 out.go:368] Setting JSON to false
	I1212 19:57:26.446984  163637 mustload.go:66] Loading cluster: multinode-322761
	I1212 19:57:26.447103  163637 notify.go:221] Checking for updates...
	I1212 19:57:26.447329  163637 config.go:182] Loaded profile config "multinode-322761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:57:26.447347  163637 status.go:174] checking status of multinode-322761 ...
	I1212 19:57:26.447719  163637 cli_runner.go:164] Run: docker container inspect multinode-322761 --format={{.State.Status}}
	I1212 19:57:26.465338  163637 status.go:371] multinode-322761 host status = "Running" (err=<nil>)
	I1212 19:57:26.465368  163637 host.go:66] Checking if "multinode-322761" exists ...
	I1212 19:57:26.465611  163637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-322761
	I1212 19:57:26.482723  163637 host.go:66] Checking if "multinode-322761" exists ...
	I1212 19:57:26.482989  163637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:57:26.483028  163637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-322761
	I1212 19:57:26.500391  163637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/multinode-322761/id_rsa Username:docker}
	I1212 19:57:26.591157  163637 ssh_runner.go:195] Run: systemctl --version
	I1212 19:57:26.597045  163637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:57:26.608448  163637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:57:26.659860  163637 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-12 19:57:26.650919686 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 19:57:26.660397  163637 kubeconfig.go:125] found "multinode-322761" server: "https://192.168.67.2:8443"
	I1212 19:57:26.660428  163637 api_server.go:166] Checking apiserver status ...
	I1212 19:57:26.660463  163637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:26.671301  163637 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	W1212 19:57:26.678970  163637 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:57:26.679017  163637 ssh_runner.go:195] Run: ls
	I1212 19:57:26.682504  163637 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1212 19:57:26.686413  163637 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1212 19:57:26.686431  163637 status.go:463] multinode-322761 apiserver status = Running (err=<nil>)
	I1212 19:57:26.686439  163637 status.go:176] multinode-322761 status: &{Name:multinode-322761 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:57:26.686452  163637 status.go:174] checking status of multinode-322761-m02 ...
	I1212 19:57:26.686659  163637 cli_runner.go:164] Run: docker container inspect multinode-322761-m02 --format={{.State.Status}}
	I1212 19:57:26.703145  163637 status.go:371] multinode-322761-m02 host status = "Running" (err=<nil>)
	I1212 19:57:26.703162  163637 host.go:66] Checking if "multinode-322761-m02" exists ...
	I1212 19:57:26.703430  163637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-322761-m02
	I1212 19:57:26.719341  163637 host.go:66] Checking if "multinode-322761-m02" exists ...
	I1212 19:57:26.719566  163637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:57:26.719597  163637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-322761-m02
	I1212 19:57:26.735948  163637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/22112-5703/.minikube/machines/multinode-322761-m02/id_rsa Username:docker}
	I1212 19:57:26.825775  163637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:57:26.837223  163637 status.go:176] multinode-322761-m02 status: &{Name:multinode-322761-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:57:26.837257  163637 status.go:174] checking status of multinode-322761-m03 ...
	I1212 19:57:26.837536  163637 cli_runner.go:164] Run: docker container inspect multinode-322761-m03 --format={{.State.Status}}
	I1212 19:57:26.854731  163637 status.go:371] multinode-322761-m03 host status = "Stopped" (err=<nil>)
	I1212 19:57:26.854749  163637 status.go:384] host is not running, skipping remaining checks
	I1212 19:57:26.854755  163637 status.go:176] multinode-322761-m03 status: &{Name:multinode-322761-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
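The exit-status-7 behaviour above (status succeeds but signals that a node is stopped) also applies to the machine-readable form used by CopyFile (status --output json). Below is a sketch only: the JSON key names are assumed to match the status fields printed in the trace above (Name, Host, Kubelet, APIServer, Kubeconfig), and the single-object fallback for one-node profiles is likewise an assumption rather than a documented schema.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// nodeStatus assumes the JSON keys mirror the status fields shown in the
// --alsologtostderr trace above.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// status exits non-zero (7) when any node is stopped but still prints the
	// JSON document, so stdout is inspected regardless of the error.
	out, _ := exec.Command("minikube", "-p", "multinode-322761",
		"status", "--output", "json").Output()

	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		var single nodeStatus // assumption: single-node profiles return one object
		if err := json.Unmarshal(out, &single); err != nil {
			fmt.Fprintln(os.Stderr, "unexpected status output:", err)
			os.Exit(1)
		}
		nodes = []nodeStatus{single}
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			n.Name, n.Host, n.Kubelet, n.APIServer, n.Kubeconfig)
	}
}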

TestMultiNode/serial/StartAfterStop (7.02s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 node start m03 -v=5 --alsologtostderr
E1212 19:57:32.071421    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-322761 node start m03 -v=5 --alsologtostderr: (6.36385265s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.02s)

TestMultiNode/serial/RestartKeepsNodes (56.58s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-322761
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-322761
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-322761: (29.414878071s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-322761 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-322761 --wait=true -v=5 --alsologtostderr: (27.046566212s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-322761
--- PASS: TestMultiNode/serial/RestartKeepsNodes (56.58s)

TestMultiNode/serial/DeleteNode (4.88s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-322761 node delete m03: (4.320755826s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.88s)

TestMultiNode/serial/StopMultiNode (28.44s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 stop
E1212 19:58:55.137824    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-322761 stop: (28.246523419s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-322761 status: exit status 7 (96.077701ms)

-- stdout --
	multinode-322761
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-322761-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr: exit status 7 (94.734011ms)

-- stdout --
	multinode-322761
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-322761-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 19:59:03.737180  173100 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:59:03.737289  173100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:59:03.737301  173100 out.go:374] Setting ErrFile to fd 2...
	I1212 19:59:03.737307  173100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:59:03.737488  173100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 19:59:03.737634  173100 out.go:368] Setting JSON to false
	I1212 19:59:03.737656  173100 mustload.go:66] Loading cluster: multinode-322761
	I1212 19:59:03.737957  173100 notify.go:221] Checking for updates...
	I1212 19:59:03.738582  173100 config.go:182] Loaded profile config "multinode-322761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:59:03.738610  173100 status.go:174] checking status of multinode-322761 ...
	I1212 19:59:03.739450  173100 cli_runner.go:164] Run: docker container inspect multinode-322761 --format={{.State.Status}}
	I1212 19:59:03.760290  173100 status.go:371] multinode-322761 host status = "Stopped" (err=<nil>)
	I1212 19:59:03.760309  173100 status.go:384] host is not running, skipping remaining checks
	I1212 19:59:03.760316  173100 status.go:176] multinode-322761 status: &{Name:multinode-322761 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:59:03.760355  173100 status.go:174] checking status of multinode-322761-m02 ...
	I1212 19:59:03.760607  173100 cli_runner.go:164] Run: docker container inspect multinode-322761-m02 --format={{.State.Status}}
	I1212 19:59:03.777085  173100 status.go:371] multinode-322761-m02 host status = "Stopped" (err=<nil>)
	I1212 19:59:03.777099  173100 status.go:384] host is not running, skipping remaining checks
	I1212 19:59:03.777104  173100 status.go:176] multinode-322761-m02 status: &{Name:multinode-322761-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.44s)

TestMultiNode/serial/RestartMultiNode (25.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-322761 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-322761 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (24.719720998s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-322761 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (25.29s)

TestMultiNode/serial/ValidateNameConflict (21.73s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-322761
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-322761-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-322761-m02 --driver=docker  --container-runtime=crio: exit status 14 (71.272178ms)

-- stdout --
	* [multinode-322761-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-322761-m02' is duplicated with machine name 'multinode-322761-m02' in profile 'multinode-322761'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-322761-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-322761-m03 --driver=docker  --container-runtime=crio: (19.040865201s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-322761
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-322761: exit status 80 (275.501432ms)

-- stdout --
	* Adding node m03 to cluster multinode-322761 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-322761-m03 already exists in multinode-322761-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-322761-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-322761-m03: (2.287882284s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.73s)
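The MK_USAGE failure above (exit status 14) comes from the rule that a new profile name may not collide with an existing profile or any of its machine names (profile, profile-m02, profile-m03, ...). A hedged Go sketch of that rule, with the validateProfileName helper and the in-memory profile map being illustrative only:

package main

import "fmt"

// validateProfileName rejects a new profile name that matches an existing
// profile or any machine inside an existing multinode profile.
func validateProfileName(newName string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		if newName == profile {
			return fmt.Errorf("profile name %q already exists", newName)
		}
		for _, m := range machines {
			if newName == m {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", newName, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-322761": {"multinode-322761", "multinode-322761-m02"},
	}
	fmt.Println(validateProfileName("multinode-322761-m02", existing)) // rejected, as in the test
	fmt.Println(validateProfileName("multinode-322761-m03", existing)) // allowed
}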

                                                
                                    
x
+
TestPreload (82.23s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-925787 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1212 20:00:00.416943    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-925787 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (44.426679133s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-925787 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-925787
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-925787: (7.956179579s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-925787 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1212 20:01:11.446082    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-925787 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (26.447659011s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-925787 image list
helpers_test.go:176: Cleaning up "test-preload-925787" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-925787
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-925787: (2.306978434s)
--- PASS: TestPreload (82.23s)
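TestPreload pulls gcr.io/k8s-minikube/busybox before stopping, then restarts with --preload=true and uses image list to confirm the pulled image survived. A minimal Go sketch of that final check, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent runs `minikube image list` for the profile and reports whether
// the given image reference appears in the output.
func imagePresent(profile, image string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imagePresent("test-preload-925787", "gcr.io/k8s-minikube/busybox")
	fmt.Println("busybox still listed after preload restart:", ok, "err:", err)
}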

                                                
                                    
x
+
TestScheduledStopUnix (98.06s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-349376 --memory=3072 --driver=docker  --container-runtime=crio
E1212 20:01:23.485881    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-349376 --memory=3072 --driver=docker  --container-runtime=crio: (21.434791121s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-349376 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 20:01:38.614711  190032 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:01:38.614962  190032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:01:38.614971  190032 out.go:374] Setting ErrFile to fd 2...
	I1212 20:01:38.614975  190032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:01:38.615132  190032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:01:38.615351  190032 out.go:368] Setting JSON to false
	I1212 20:01:38.615431  190032 mustload.go:66] Loading cluster: scheduled-stop-349376
	I1212 20:01:38.615739  190032 config.go:182] Loaded profile config "scheduled-stop-349376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:01:38.615802  190032 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/config.json ...
	I1212 20:01:38.615967  190032 mustload.go:66] Loading cluster: scheduled-stop-349376
	I1212 20:01:38.616060  190032 config.go:182] Loaded profile config "scheduled-stop-349376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-349376 -n scheduled-stop-349376
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-349376 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 20:01:38.982466  190186 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:01:38.982568  190186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:01:38.982577  190186 out.go:374] Setting ErrFile to fd 2...
	I1212 20:01:38.982581  190186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:01:38.982742  190186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:01:38.982967  190186 out.go:368] Setting JSON to false
	I1212 20:01:38.983134  190186 daemonize_unix.go:73] killing process 190066 as it is an old scheduled stop
	I1212 20:01:38.983240  190186 mustload.go:66] Loading cluster: scheduled-stop-349376
	I1212 20:01:38.983553  190186 config.go:182] Loaded profile config "scheduled-stop-349376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:01:38.983623  190186 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/config.json ...
	I1212 20:01:38.983790  190186 mustload.go:66] Loading cluster: scheduled-stop-349376
	I1212 20:01:38.983875  190186 config.go:182] Loaded profile config "scheduled-stop-349376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 20:01:38.987985    9254 retry.go:31] will retry after 90.645µs: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.989157    9254 retry.go:31] will retry after 201.59µs: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.990326    9254 retry.go:31] will retry after 257.975µs: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.991476    9254 retry.go:31] will retry after 345.604µs: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.992601    9254 retry.go:31] will retry after 748.537µs: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.993717    9254 retry.go:31] will retry after 903.172µs: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.994835    9254 retry.go:31] will retry after 1.203926ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.997036    9254 retry.go:31] will retry after 1.205397ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:38.999252    9254 retry.go:31] will retry after 3.594677ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:39.003470    9254 retry.go:31] will retry after 2.360394ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:39.006681    9254 retry.go:31] will retry after 6.085651ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:39.012810    9254 retry.go:31] will retry after 4.667187ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:39.017990    9254 retry.go:31] will retry after 10.710524ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:39.029378    9254 retry.go:31] will retry after 23.859835ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
I1212 20:01:39.053599    9254 retry.go:31] will retry after 36.481947ms: open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-349376 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-349376 -n scheduled-stop-349376
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-349376
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-349376 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 20:02:04.803086  190829 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:02:04.803333  190829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:02:04.803341  190829 out.go:374] Setting ErrFile to fd 2...
	I1212 20:02:04.803346  190829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:02:04.803558  190829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:02:04.803797  190829 out.go:368] Setting JSON to false
	I1212 20:02:04.803868  190829 mustload.go:66] Loading cluster: scheduled-stop-349376
	I1212 20:02:04.804198  190829 config.go:182] Loaded profile config "scheduled-stop-349376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:02:04.804261  190829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/config.json ...
	I1212 20:02:04.804445  190829 mustload.go:66] Loading cluster: scheduled-stop-349376
	I1212 20:02:04.804534  190829 config.go:182] Loaded profile config "scheduled-stop-349376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1212 20:02:32.070658    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-349376
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-349376: exit status 7 (76.315526ms)

                                                
                                                
-- stdout --
	scheduled-stop-349376
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-349376 -n scheduled-stop-349376
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-349376 -n scheduled-stop-349376: exit status 7 (74.96027ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-349376" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-349376
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-349376: (5.223439277s)
--- PASS: TestScheduledStopUnix (98.06s)
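The scheduled-stop flow above writes a pid file under the profile directory, kills any older scheduled-stop process when a new schedule is set (the daemonize_unix.go line), and the test polls for the pid file with growing delays (the retry.go lines). A hedged Go sketch of that polling step; the waitForScheduledStopPID helper is illustrative, and the path mirrors this run:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// waitForScheduledStopPID waits, with doubling delays, for the pid file to
// appear and then returns the scheduled-stop process id stored in it.
func waitForScheduledStopPID(pidFile string, attempts int) (int, error) {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(pidFile)
		if err == nil {
			return strconv.Atoi(strings.TrimSpace(string(data)))
		}
		time.Sleep(delay)
		delay *= 2 // back off, roughly like the retry intervals logged above
	}
	return 0, fmt.Errorf("pid file %s never appeared", pidFile)
}

func main() {
	pid, err := waitForScheduledStopPID("/home/jenkins/minikube-integration/22112-5703/.minikube/profiles/scheduled-stop-349376/pid", 15)
	fmt.Println("scheduled stop pid:", pid, "err:", err)
}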

                                                
                                    
x
+
TestInsufficientStorage (8.58s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-151805 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-151805 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.149017035s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"48ac7b72-8f05-4ec9-b3b5-5e0064879eb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-151805] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b1e7baf-4b7e-434b-907a-5fa6a22a6708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"298f2a2a-f73b-437a-b061-cae3d9581f48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a0ed3422-9e1a-4222-8204-0502f1ce9546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig"}}
	{"specversion":"1.0","id":"873438ae-fcdc-413c-903b-3aa23f4a3533","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube"}}
	{"specversion":"1.0","id":"c263131d-8d22-4e87-86b6-f079902a5938","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aad76b77-2131-44be-861f-fbd29c9fd24c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e95bf179-27b3-4547-a7bf-f8af8e84dd86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"75d15e05-ae4e-4ec3-9d17-b2b04206cbd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"02829d0e-5884-4412-9cac-3e107fffe4f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"165670c5-de9a-4b88-849b-996d1df308b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"20e62dc9-3e9c-49c1-971c-323212b875ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-151805\" primary control-plane node in \"insufficient-storage-151805\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"61d9d687-0436-4905-a9d0-f0229fd80264","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765505794-22112 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f033d05-a8c4-40cf-ac4f-46421cc7fd46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad8a11cd-12c5-4524-84cf-41b643078cfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-151805 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-151805 --output=json --layout=cluster: exit status 7 (280.22425ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-151805","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-151805","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 20:03:01.605007  193378 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-151805" does not appear in /home/jenkins/minikube-integration/22112-5703/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-151805 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-151805 --output=json --layout=cluster: exit status 7 (274.753768ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-151805","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-151805","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 20:03:01.880575  193490 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-151805" does not appear in /home/jenkins/minikube-integration/22112-5703/kubeconfig
	E1212 20:03:01.890524  193490 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/insufficient-storage-151805/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-151805" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-151805
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-151805: (1.871286493s)
--- PASS: TestInsufficientStorage (8.58s)
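The --output=json --layout=cluster status above reports StatusCode 507 ("InsufficientStorage") for both the cluster and its node. A minimal Go decode of just the fields this test looks at; the struct is a deliberately small subset, not the full minikube status schema:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterState covers only the top-level fields checked here.
type clusterState struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
}

func main() {
	raw := `{"Name":"insufficient-storage-151805","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`
	var st clusterState
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusCode == 507, st.StatusName) // true InsufficientStorage
}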

                                                
                                    
x
+
TestRunningBinaryUpgrade (315.15s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3740622289 start -p running-upgrade-569692 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3740622289 start -p running-upgrade-569692 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.425497885s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-569692 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-569692 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.801772471s)
helpers_test.go:176: Cleaning up "running-upgrade-569692" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-569692
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-569692: (2.327958743s)
--- PASS: TestRunningBinaryUpgrade (315.15s)
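The upgrade path exercised above is: start a profile with a legacy minikube binary, then re-run start on the same live profile with the binary under test. A hedged Go sketch of that orchestration, reusing the binary paths and profile name from this run:

package main

import (
	"os"
	"os/exec"
)

// startWith runs `minikube start` for the given binary against one shared profile.
func startWith(binary, profile string) error {
	cmd := exec.Command(binary, "start", "-p", profile, "--memory=3072",
		"--driver=docker", "--container-runtime=crio")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Legacy binary first, then the freshly built binary against the running cluster.
	if err := startWith("/tmp/minikube-v1.35.0.3740622289", "running-upgrade-569692"); err != nil {
		panic(err)
	}
	if err := startWith("out/minikube-linux-amd64", "running-upgrade-569692"); err != nil {
		panic(err)
	}
}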

                                                
                                    
x
+
TestKubernetesUpgrade (304.88s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.360704811s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-991615
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-991615: (11.886950994s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-991615 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-991615 status --format={{.Host}}: exit status 7 (74.792955ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 20:06:11.445937    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:07:32.070821    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-828160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.962423294s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-991615 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (77.845733ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-991615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-991615
	    minikube start -p kubernetes-upgrade-991615 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9916152 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-991615 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-991615 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.067995775s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-991615" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-991615
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-991615: (2.386920406s)
--- PASS: TestKubernetesUpgrade (304.88s)
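The refused downgrade above (exit 106, K8S_DOWNGRADE_UNSUPPORTED) amounts to a semantic-version comparison between the requested version and the version already running on the cluster. A hedged sketch of that guard using golang.org/x/mod/semver purely for the comparison; checkDowngrade is an illustrative name:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade refuses a requested Kubernetes version older than the one
// already installed on the cluster.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkDowngrade("v1.35.0-beta.0", "v1.28.0"))        // refused, as above
	fmt.Println(checkDowngrade("v1.35.0-beta.0", "v1.35.0-beta.0")) // allowed
}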

                                                
                                    
x
+
TestMissingContainerUpgrade (97.33s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1904289172 start -p missing-upgrade-551899 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1904289172 start -p missing-upgrade-551899 --memory=3072 --driver=docker  --container-runtime=crio: (43.902730287s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-551899
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-551899: (10.426894899s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-551899
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-551899 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-551899 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.744837515s)
helpers_test.go:176: Cleaning up "missing-upgrade-551899" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-551899
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-551899: (2.41106047s)
--- PASS: TestMissingContainerUpgrade (97.33s)
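The missing-container scenario above removes the node container with plain docker stop / docker rm and then expects a fresh start to notice the container is gone and recreate it. A hedged Go sketch of the detection half, using only the docker CLI commands already shown in this log:

package main

import (
	"fmt"
	"os/exec"
)

// containerExists reports whether the node container is still present;
// `docker container inspect` exits non-zero once the container has been removed.
func containerExists(name string) bool {
	return exec.Command("docker", "container", "inspect", name).Run() == nil
}

func main() {
	if !containerExists("missing-upgrade-551899") {
		fmt.Println("node container missing; a fresh `minikube start` must recreate it")
	}
}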

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-562130 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-562130 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (80.771792ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-562130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
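The rejection above is a mutually-exclusive-flag check: --kubernetes-version cannot be combined with --no-kubernetes, and the CLI exits with the MK_USAGE code 14. A hedged Go sketch of that validation; the flag wiring is illustrative rather than minikube's actual option handling:

package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	// Reject the combination the test exercises above.
	if *noK8s && *k8sVersion != "" {
		err := errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
	fmt.Println("flags ok")
}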

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (42.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-562130 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-562130 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.137631879s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-562130 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (23.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-562130 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-562130 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.055215966s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-562130 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-562130 status -o json: exit status 2 (297.413286ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-562130","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-562130
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-562130: (1.97848239s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-562130 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-562130 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.76062381s)
--- PASS: TestNoKubernetes/serial/Start (5.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22112-5703/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-562130 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-562130 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.019357ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
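The check above runs systemctl is-active --quiet service kubelet inside the node over minikube ssh; systemctl exits 0 when the unit is active and non-zero otherwise (3 here), so the test treats any non-zero exit as "kubelet not running". A minimal Go sketch of reading that exit status, assuming the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive maps exit status 0 to "running" and any other exit status to
// "not running", matching the expectation in the test above.
func kubeletActive() (bool, error) {
	err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-562130",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit (3 above) means the unit is not active
	}
	return false, err
}

func main() {
	fmt.Println(kubeletActive())
}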

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (31.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.553964244s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.468171851s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.02s)
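profile list supports both the table form and --output=json; each invocation takes roughly 15 seconds here because other live clusters are being probed. A hedged Go sketch that only confirms the JSON parses and lists its top-level keys, decoded generically because the exact schema is not shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output=json").Output()
	if err != nil {
		panic(err)
	}
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key := range doc {
		fmt.Println("top-level key:", key)
	}
}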

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-789448 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-789448 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (165.682306ms)

                                                
                                                
-- stdout --
	* [false-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:04:38.366424  217656 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:04:38.366757  217656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:04:38.366771  217656 out.go:374] Setting ErrFile to fd 2...
	I1212 20:04:38.366777  217656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:04:38.367079  217656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-5703/.minikube/bin
	I1212 20:04:38.367695  217656 out.go:368] Setting JSON to false
	I1212 20:04:38.369426  217656 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2825,"bootTime":1765567053,"procs":376,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:04:38.369485  217656 start.go:143] virtualization: kvm guest
	I1212 20:04:38.371351  217656 out.go:179] * [false-789448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:04:38.372434  217656 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:04:38.372451  217656 notify.go:221] Checking for updates...
	I1212 20:04:38.374348  217656 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:04:38.375438  217656 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-5703/kubeconfig
	I1212 20:04:38.376485  217656 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-5703/.minikube
	I1212 20:04:38.377564  217656 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:04:38.378594  217656 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:04:38.380948  217656 config.go:182] Loaded profile config "NoKubernetes-562130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 20:04:38.381080  217656 config.go:182] Loaded profile config "missing-upgrade-551899": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 20:04:38.381233  217656 config.go:182] Loaded profile config "running-upgrade-569692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 20:04:38.381392  217656 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:04:38.407937  217656 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1212 20:04:38.408061  217656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:04:38.465998  217656 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-12 20:04:38.456647241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 20:04:38.466114  217656 docker.go:319] overlay module found
	I1212 20:04:38.468166  217656 out.go:179] * Using the docker driver based on user configuration
	I1212 20:04:38.469311  217656 start.go:309] selected driver: docker
	I1212 20:04:38.469330  217656 start.go:927] validating driver "docker" against <nil>
	I1212 20:04:38.469345  217656 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:04:38.471018  217656 out.go:203] 
	W1212 20:04:38.472053  217656 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 20:04:38.473097  217656 out.go:203] 

                                                
                                                
** /stderr **
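The MK_USAGE exit above is the runtime/CNI compatibility rule: the crio container runtime needs a CNI, so --cni=false is rejected before any node is created. A hedged Go sketch of that validation; validateCNI is an illustrative helper, not minikube's actual function:

package main

import (
	"fmt"
	"os"
)

// validateCNI rejects the combination exercised above: crio with CNI disabled.
func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}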
net_test.go:88: 
----------------------- debugLogs start: false-789448 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-789448

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-789448" does not exist

>>> k8s: describe api server pod(s):
error: context "false-789448" does not exist

>>> k8s: api server logs:
error: context "false-789448" does not exist

>>> host: /etc/cni:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: ip a s:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: ip r s:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: iptables-save:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: iptables table nat:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> k8s: describe kube-proxy daemon set:
error: context "false-789448" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-789448" does not exist

>>> k8s: kube-proxy logs:
error: context "false-789448" does not exist

>>> host: kubelet daemon status:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: kubelet daemon config:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> k8s: kubelet logs:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:04:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-551899
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:03:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-569692
contexts:
- context:
    cluster: missing-upgrade-551899
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:04:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: missing-upgrade-551899
  name: missing-upgrade-551899
- context:
    cluster: running-upgrade-569692
    user: running-upgrade-569692
  name: running-upgrade-569692
current-context: missing-upgrade-551899
kind: Config
users:
- name: missing-upgrade-551899
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/missing-upgrade-551899/client.crt
    client-key: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/missing-upgrade-551899/client.key
- name: running-upgrade-569692
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/running-upgrade-569692/client.crt
    client-key: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/running-upgrade-569692/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-789448

>>> host: docker daemon status:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: docker daemon config:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /etc/docker/daemon.json:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: docker system info:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: cri-docker daemon status:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: cri-docker daemon config:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: cri-dockerd version:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: containerd daemon status:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: containerd daemon config:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /etc/containerd/config.toml:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: containerd config dump:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: crio daemon status:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: crio daemon config:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: /etc/crio:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

>>> host: crio config:
* Profile "false-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-789448"

----------------------- debugLogs end: false-789448 [took: 3.124053919s] --------------------------------
helpers_test.go:176: Cleaning up "false-789448" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-789448
--- PASS: TestNetworkPlugins/group/false (3.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-562130
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-562130: (1.283709282s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestPause/serial/Start (41.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-243084 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-243084 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.72543193s)
--- PASS: TestPause/serial/Start (41.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-562130 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-562130 --driver=docker  --container-runtime=crio: (6.369714263s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-562130 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-562130 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.396085ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-243084 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-243084 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.161496702s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (282.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1981622570 start -p stopped-upgrade-180826 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1981622570 start -p stopped-upgrade-180826 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.370380379s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1981622570 -p stopped-upgrade-180826 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1981622570 -p stopped-upgrade-180826 stop: (1.847176931s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-180826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-180826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.5691322s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (282.79s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.107291852s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (45.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.184780701s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (45.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-824670 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6c5ea4b4-8ab0-4bd9-ac11-07892e94a6d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6c5ea4b4-8ab0-4bd9-ac11-07892e94a6d2] Running
E1212 20:09:14.514152    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003914477s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-824670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-753103 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3b9946fe-7d9a-4087-960d-57c19ff595d9] Pending
helpers_test.go:353: "busybox" [3b9946fe-7d9a-4087-960d-57c19ff595d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3b9946fe-7d9a-4087-960d-57c19ff595d9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003804039s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-753103 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (15.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-824670 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-824670 --alsologtostderr -v=3: (15.930710839s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-753103 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-753103 --alsologtostderr -v=3: (18.09037079s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670: exit status 7 (74.873212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-824670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (45.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-824670 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.811766783s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-824670 -n old-k8s-version-824670
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103: exit status 7 (79.738308ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-753103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (48.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1212 20:10:00.417140    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/functional-853944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-753103 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (48.246084168s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753103 -n no-preload-753103
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8xmbb" [85485575-d55f-4968-9740-35c3df94662b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003286922s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8xmbb" [85485575-d55f-4968-9740-35c3df94662b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004295867s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-824670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-180826
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-180826: (1.025956642s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (41.673348778s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.67s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-824670 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-7c9ms" [1bb4497b-d848-4f7c-ba73-e9d7e094026a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003669628s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-7c9ms" [1bb4497b-d848-4f7c-ba73-e9d7e094026a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003363888s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-753103 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (27.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (27.974927664s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (45.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.526251628s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753103 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (43.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.071904178s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-832562 --alsologtostderr -v=3
E1212 20:11:11.446053    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/addons-410014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-832562 --alsologtostderr -v=3: (2.506574124s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562: exit status 7 (82.052822ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-832562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-832562 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (9.961023599s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832562 -n newest-cni-832562
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-433034 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4c0c6390-93fc-431e-ab56-29f5ec5d45ba] Pending
helpers_test.go:353: "busybox" [4c0c6390-93fc-431e-ab56-29f5ec5d45ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4c0c6390-93fc-431e-ab56-29f5ec5d45ba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006541999s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-433034 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-832562 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-433034 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-433034 --alsologtostderr -v=3: (16.572794086s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-399565 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2b73ee4b-c108-4ada-b144-9eb629cde278] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2b73ee4b-c108-4ada-b144-9eb629cde278] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003827191s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-399565 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.402559523s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-399565 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-399565 --alsologtostderr -v=3: (16.357708781s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-789448 "pgrep -a kubelet"
I1212 20:11:38.533727    9254 config.go:182] Loaded profile config "auto-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-789448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-djqrp" [3c5cdeae-dc95-4796-bef0-c92d99dd1970] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-djqrp" [3c5cdeae-dc95-4796-bef0-c92d99dd1970] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003330312s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034: exit status 7 (95.27874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-433034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-433034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.682718047s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433034 -n default-k8s-diff-port-433034
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-789448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565: exit status 7 (108.085053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-399565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (46.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-399565 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.761265201s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-399565 -n embed-certs-399565
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (52.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.216138103s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-jr4mb" [2e3f7b03-aedd-4ac5-a470-92c922e7facf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004133185s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-789448 "pgrep -a kubelet"
I1212 20:12:20.137512    9254 config.go:182] Loaded profile config "kindnet-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-789448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lph55" [bff6d3ad-e4e4-4af7-a060-da9e6bca031b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-lph55" [bff6d3ad-e4e4-4af7-a060-da9e6bca031b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003570378s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-nc8xd" [32a006b0-148e-4def-9968-32c4eaafd9de] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003533896s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-789448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)
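The DNS, Localhost and HairPin checks above all run inside the netcat deployment. A minimal sketch of the same three probes, assuming the kindnet-789448 profile and its netcat deployment are still running:

    # Cluster DNS: resolve the kubernetes.default service from inside the pod.
    kubectl --context kindnet-789448 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: port 8080 reachable inside the pod itself.
    kubectl --context kindnet-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod reaching its own service name back through the cluster network.
    kubectl --context kindnet-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"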

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-nc8xd" [32a006b0-148e-4def-9968-32c4eaafd9de] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003427502s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-433034 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
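The two dashboard checks above only confirm that the addon pods survive a stop/start cycle. A minimal sketch of the same verification, assuming the default-k8s-diff-port-433034 profile is still up:

    # Wait for the dashboard pod to report Ready, then inspect the scraper deployment.
    kubectl --context default-k8s-diff-port-433034 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    kubectl --context default-k8s-diff-port-433034 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper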

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433034 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-hwvvn" [764cbf67-466b-495a-a5d8-bf8234eb5da2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005006306s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-hwvvn" [764cbf67-466b-495a-a5d8-bf8234eb5da2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00293521s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-399565 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.775936155s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (68.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m8.138090458s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-399565 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)
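VerifyKubernetesImages lists every image known to the runtime and reports anything outside the expected Kubernetes set. A minimal sketch of the same audit, assuming the embed-certs-399565 profile still exists; the grep pattern is illustrative only, not the test's own allow-list:

    # Dump all images as JSON, then filter out the registries treated as expected;
    # whatever remains is reported as a "non-minikube" image, as in the log above.
    out/minikube-linux-amd64 -p embed-certs-399565 image list --format=json \
      | grep -Ev 'registry.k8s.io|gcr.io/k8s-minikube/storage-provisioner'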

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-hd58q" [b41b3fa4-ac0d-4f46-9ac4-c0795974470f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-hd58q" [b41b3fa4-ac0d-4f46-9ac4-c0795974470f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00419629s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
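The ControllerPod check waits for the CNI's node agent to become healthy. A minimal sketch for the calico run above, assuming the calico-789448 profile is still present (the label selector is the one the test uses):

    # Calico's per-node agent runs as a DaemonSet in kube-system.
    kubectl --context calico-789448 -n kube-system get pods -l k8s-app=calico-node
    kubectl --context calico-789448 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m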

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (40.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (40.480654245s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-789448 "pgrep -a kubelet"
I1212 20:13:07.087804    9254 config.go:182] Loaded profile config "calico-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-789448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8xxpt" [b1f23c6b-ecd0-4435-b5e9-e087285d70ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8xxpt" [b1f23c6b-ecd0-4435-b5e9-e087285d70ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.007292602s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-789448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (47.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-789448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.525839142s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-789448 "pgrep -a kubelet"
I1212 20:13:41.264971    9254 config.go:182] Loaded profile config "custom-flannel-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-789448 replace --force -f testdata/netcat-deployment.yaml
I1212 20:13:41.839803    9254 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1212 20:13:41.948928    9254 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qhj87" [7fe2dda1-0210-4404-ac7a-fbe9d9243554] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qhj87" [7fe2dda1-0210-4404-ac7a-fbe9d9243554] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004091017s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-789448 "pgrep -a kubelet"
I1212 20:13:43.468553    9254 config.go:182] Loaded profile config "bridge-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-789448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-77jhq" [e4371695-fab9-44dc-8361-c1b6606f1173] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-77jhq" [e4371695-fab9-44dc-8361-c1b6606f1173] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00379015s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-789448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-789448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-789448 "pgrep -a kubelet"
I1212 20:13:59.699835    9254 config.go:182] Loaded profile config "enable-default-cni-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-789448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ktrhr" [1ed7d6a2-c06c-475f-b9d7-8ec8b6946f53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ktrhr" [1ed7d6a2-c06c-475f-b9d7-8ec8b6946f53] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004175425s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-789448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1212 20:14:09.183046    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:14:09.189468    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:14:09.202253    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:14:09.223701    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:14:09.265109    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/old-k8s-version-824670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-wkgq4" [0982ba0d-d104-454f-8b8e-81a0c77ff1ff] Running
E1212 20:14:26.660903    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002956173s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
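The flannel variant performs the same controller check against the kube-flannel namespace. A minimal sketch, assuming the flannel-789448 profile still exists:

    # Flannel's DaemonSet pods live in kube-flannel and carry the app=flannel label.
    kubectl --context flannel-789448 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-789448 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m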

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-789448 "pgrep -a kubelet"
I1212 20:14:31.433545    9254 config.go:182] Loaded profile config "flannel-789448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-789448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mr4xc" [03f03c77-5899-4afc-af51-0bfe27ec7850] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mr4xc" [03f03c77-5899-4afc-af51-0bfe27ec7850] Running
E1212 20:14:36.902441    9254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/no-preload-753103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003329304s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-789448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-789448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
370 TestStartStop/group/disable-driver-mounts 0.19
378 TestNetworkPlugins/group/kubenet 3.42
386 TestNetworkPlugins/group/cilium 5.02
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-044739" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-044739
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-789448 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-789448" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:03:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-551899
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:03:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-569692
contexts:
- context:
    cluster: missing-upgrade-551899
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:03:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-551899
  name: missing-upgrade-551899
- context:
    cluster: running-upgrade-569692
    user: running-upgrade-569692
  name: running-upgrade-569692
current-context: ""
kind: Config
users:
- name: missing-upgrade-551899
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/missing-upgrade-551899/client.crt
    client-key: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/missing-upgrade-551899/client.key
- name: running-upgrade-569692
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/running-upgrade-569692/client.crt
    client-key: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/running-upgrade-569692/client.key
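Note that this kubeconfig contains only the two *-upgrade-* profiles; there is no entry for kubenet-789448, which is why every kubectl probe above fails with a "context does not exist" / "context was not found" error. A minimal sketch of how one could confirm this against the same kubeconfig (it assumes KUBECONFIG points at the file dumped above):

# Sketch only; assumes KUBECONFIG points at the kubeconfig shown above.
kubectl config get-contexts                              # shows only the two *-upgrade-* contexts
kubectl config view -o jsonpath='{.contexts[*].name}'    # same list on one line
kubectl --context kubenet-789448 get pods                # fails, matching the errors in this log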

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-789448

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-789448"

                                                
                                                
----------------------- debugLogs end: kubenet-789448 [took: 3.254981727s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-789448" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-789448
--- SKIP: TestNetworkPlugins/group/kubenet (3.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-789448 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-789448
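For reference, the probes above are ordinary in-pod DNS and connectivity checks against the cluster DNS service at 10.96.0.10; had the cilium-789448 cluster existed, they would look roughly like the following (the deployment name comes from the "netcat deployment" entries below, but the exact commands are illustrative assumptions, not the harness's source):

# Hypothetical form of the probes, run inside the netcat test pod; illustrative only.
kubectl --context cilium-789448 exec deploy/netcat -- nslookup kubernetes.default
kubectl --context cilium-789448 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +tcp
kubectl --context cilium-789448 exec deploy/netcat -- nc -z -w 2 10.96.0.10 53        # TCP 53 reachability
kubectl --context cilium-789448 exec deploy/netcat -- nc -u -z -w 2 10.96.0.10 53     # UDP 53 reachability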

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-789448" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-5703/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:03:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-569692
contexts:
- context:
    cluster: running-upgrade-569692
    user: running-upgrade-569692
  name: running-upgrade-569692
current-context: ""
kind: Config
users:
- name: running-upgrade-569692
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/running-upgrade-569692/client.crt
    client-key: /home/jenkins/minikube-integration/22112-5703/.minikube/profiles/running-upgrade-569692/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-789448

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-789448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-789448"

                                                
                                                
----------------------- debugLogs end: cilium-789448 [took: 4.845287046s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-789448" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-789448
--- SKIP: TestNetworkPlugins/group/cilium (5.02s)

                                                
                                    